LM Studio and SLMs on a PC with 8GB RAM, No GPU - downloading models, starting a server, and agentic editing

The emergence of llama.cpp*1 in 2023 made it practical to run LLM inference on the CPU, bringing generative AI to local environments without dedicated hardware. Thanks to quantization, even models that would normally demand a large amount of VRAM can now run on a system with as little as 8GB of RAM. The advent of LM Studio and SLMs (Small Language Models) has lowered the barrier further, making it realistic to download models, start a local server, and try agentic editing on such modest hardware.
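To see why 8GB of RAM can be enough, a rough back-of-the-envelope estimate helps. The sketch below assumes a 7B-parameter model and 4 bits per weight, which are illustrative figures for a typical quantized GGUF model, not values taken from this article:

```python
def quantized_size_gib(n_params: float, bits_per_weight: float) -> float:
    """Approximate on-disk / in-memory weight size in GiB for a
    model with n_params parameters at the given quantization level."""
    return n_params * bits_per_weight / 8 / (1024 ** 3)

# Illustrative assumption: a 7B model at 4 bits per weight
# (roughly what Q4-class GGUF quantizations use).
size = quantized_size_gib(7e9, 4)
print(f"{size:.2f} GiB")  # well under 8 GiB, leaving room for the OS and KV cache
```

The same weights in 16-bit precision would need about 13 GiB, which is why quantization is the key enabler on machines of this class.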