Local LLM Setup on Windows with Ollama and LM Studio (ThinkPad / RTX A3000 GPU)
3 points | 1 hour ago | 1 comment | github.com
appsoftware | 1 hour ago
This is a walkthrough of my setup of local LLM capability on a Lenovo ThinkPad P1 Gen 4 (with an RTX A3000 6GB VRAM graphics card), using Ollama for CLI and VS Code Copilot chat access, and LM Studio as a GUI option.
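
For context, the core Ollama commands look roughly like the sketch below. The model name here is my assumption for illustration, not necessarily what the walkthrough uses; the point is that smaller quantised models are the ones that fit in 6GB of VRAM.

    ollama pull qwen2.5-coder:3b    # example model, ~2GB quantised, fits in 6GB VRAM
    ollama run qwen2.5-coder:3b     # interactive chat in the terminal
    ollama serve                    # local API at http://localhost:11434 (the Windows app normally starts this for you)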

My Lenovo ThinkPad P1 Gen 4 is coming up on four years old. It is a powerful workstation with a good, though by no means state-of-the-art, GPU in the RTX A3000. My expectation is that many developers will have a PC capable of running local LLMs as set up here.
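
If you want to check whether your own machine qualifies, a quick sanity check (assuming an NVIDIA card and a recent Ollama build with the ps subcommand) is:

    nvidia-smi    # confirm the GPU is visible and see how much VRAM is free
    ollama ps     # after loading a model: shows the GPU/CPU split for running models

If ollama ps reports the model as mostly CPU, the model is too large for your VRAM and a smaller quantisation is worth trying.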

See the GitHub repository for the full walkthrough:

https://github.com/gbro3n/local-ai/blob/main/docs/local-llm-...

Ref: https://www.appsoftware.com/blog/local-llm-setup-on-windows-...
