
(Updated May 13, 2025)

I was watching popular AI-trending YouTuber Matthew Bergan build a local environment for AI development the other day, and it was striking how issues like virtual environments, package management, and versioned dependencies were huge pain points and hurdles.

It occurred to me that, with all the excitement about AI and tooling, there are common issues that folks who aren't used to the command line may run into – and be stopped dead in their tracks.

Getting a local environment off the ground in the exciting world of AI is amazing, but daunting – especially given the pace of change and the constantly outdated state of the tutorial-sphere.

Here's where I try to demystify some of the magic and help you build tools that "work".

Pro-tip: The Fungible Nature of Tools in Tutorials

The "tutorial-sphere" (blogs, videos, and even LLMs) is an incredible resource, but it's also an outdated mess.

Here's where novice software engineers sometimes trip up: command/tool substitution is not only common in software tooling – in a world of aging tutorials and rapid evolution, it's required. Many key tools are fungible and can be substituted for one another.

The key skill to develop when starting out is learning to recognize when a tutorial-maker uses tool A, and substituting your developer-preferred tool B. Don't have preferences yet? That's OK – you can adopt mine below!

Unlike a baking recipe or a model kit, your development environment is more like a box of Legos. You can reach for what you have without things turning out wrong.

Again: when you're following a tutorial or video, you may see particular tools being used, but the most important thing is to use what's already set up in your environment.

Interchangeable tools often have slightly different command syntax, so expect to do a little translation.

Here are some examples of fungible tools:

python -> python3, python3.12 # the Python interpreter; often the exact version is not critical

pip -> uv, pipenv, conda or poetry # package managers that install Python packages and create isolated environments

node -> bun, deno # the JavaScript runtime (analogous to the Python interpreter)

npm -> yarn, pnpm # package managers for JavaScript

docker -> orbstack, colima # alternative runtimes for running Docker containers

With these substitutions in the back of your mind, you can take any tutorial "recipe" and transform it, substituting for your environment.
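As a quick sketch of putting this into practice, here's a loop that checks which of the interchangeable tools from the list above are already on your PATH before you start a tutorial (the tool names are from the list; the output depends on your machine):

```shell
# Check which interchangeable tools are already installed,
# so you know what to substitute when following a tutorial.
for tool in python3 pip uv node npm docker; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: installed"
  else
    echo "$tool: missing (substitute an equivalent you do have)"
  fi
done
```

`command -v` is the portable way to ask "is this on my PATH?" and works in both zsh and bash.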

The Stack

So, here's my stack:

  • MacBook running MacOS (usually trailing behind one version)
  • Terminal: Terminal.app
  • Shell: zsh
  • CLI Package Manager: Homebrew
  • Python environment management: uv
  • Python version: 3.12 (I stay a little behind the curve)
  • IDE: VS Code
  • Docker Runtime: Orbstack
  • Python Command Line tool manager: pipx
  • Coding Copilot: aider
  • Tunneling for local API servers: ngrok
  • Desktop LLM GUI: Claude.app
  • Terminal LLM CLI: llm

Getting a general setup from scratch:

  • Open Terminal
  • Install Homebrew
  • Install zsh and set it as your default shell
  • Add user-wide environment variables to ~/.zshenv (great for things like your default OpenAI API key)
  • Create a projects folder (for me: ~/projects/)
  • Install system-wide tools:
  • Via Homebrew: brew install pipx uv
  • Via Homebrew casks: brew install --cask visual-studio-code orbstack ngrok
  • Via pipx: pipx install llm and pipx install aider-chat
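Put together, the bootstrap steps can be sketched as a single script. This is a sketch, not a tested installer – review it before running; the Homebrew install command is the official one from brew.sh, and the API key value is a placeholder:

```shell
# Write the bootstrap steps to a script file, then syntax-check it
# without executing any installs.
cat > bootstrap.sh <<'EOF'
#!/bin/bash
set -euo pipefail

# Install Homebrew (official install command from brew.sh)
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

# zsh ships with recent macOS; make it the default shell
chsh -s /bin/zsh

# User-wide environment variables (placeholder value shown)
echo 'export OPENAI_API_KEY="sk-..."' >> ~/.zshenv

# Projects folder
mkdir -p ~/projects

# System-wide tools
brew install pipx uv
brew install --cask visual-studio-code orbstack ngrok
pipx install llm
pipx install aider-chat
EOF

# Parse-check only: bash -n reports syntax errors without running anything
bash -n bootstrap.sh && echo "syntax ok"
```

Writing it to a file first and checking with `bash -n` lets you read and adjust the steps before anything actually installs.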

The steps for setting up a new project:

  • Create a new project folder: mkdir ~/projects/my-new-project
  • Setup a new project
  • cd ~/projects/my-new-project
  • uv init
  • uv add openai httpx pandas instructor (or whatever packages your project needs)
  • uv run python main.py # you'll need to prepend uv run to any python command
  • OR, to enter a shell:
  • source .venv/bin/activate
  • Then, you can run python main.py
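The steps above can be wrapped in a small shell function (a sketch; new_project is a name I made up, and actually calling it assumes uv is installed):

```shell
# Sketch: the new-project steps as one reusable function.
new_project() {
  local name="$1"
  mkdir -p ~/projects/"$name"
  cd ~/projects/"$name" || return 1
  uv init                # creates pyproject.toml and starter files
  uv add openai httpx    # add whatever packages your project needs
}

# Usage: new_project my-new-project
```

Dropping a function like this into ~/.zshenv (or ~/.zshrc) makes it available in every new shell.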

How to substitute commands from the tutorial sphere:

uv is a powerful (and blazingly fast) Python package and project manager. However, commands from other package managers take some translation. Here's a translation table:

| Command | uv | pip | conda | poetry |
| --- | --- | --- | --- | --- |
| Create new project | uv init | N/A | conda create | poetry new |
| Install package | uv add <pkg> | pip install <pkg> | conda install <pkg> | poetry add <pkg> |
| Install from requirements | uv pip install -r requirements.txt | pip install -r requirements.txt | conda install --file requirements.txt | poetry install |
| Update package | uv add --upgrade <pkg> | pip install --upgrade <pkg> | conda update <pkg> | poetry update <pkg> |
| Remove package | uv remove <pkg> | pip uninstall <pkg> | conda remove <pkg> | poetry remove <pkg> |
| List packages | uv pip list | pip list | conda list | poetry show |
| Run script | uv run python script.py | python script.py | python script.py | poetry run python script.py |
| Create venv | uv venv | python -m venv | conda create | poetry env use python |
| Activate venv | source .venv/bin/activate | source venv/bin/activate | conda activate | poetry shell |
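As a toy illustration of the table's first few rows, here's a hypothetical helper that rewrites a copied pip command into its uv equivalent (uv_equiv is my own name for the sketch, not a real uv feature):

```shell
# Sketch: translate a few common pip commands into their uv equivalents.
uv_equiv() {
  case "$1" in
    "pip install -r"*)  echo "uv $1" ;;                     # requirements files go through uv pip
    "pip install "*)    echo "uv add ${1#pip install }" ;;
    "pip uninstall "*)  echo "uv remove ${1#pip uninstall }" ;;
    "pip list")         echo "uv pip list" ;;
    *)                  echo "$1" ;;                        # pass through anything unrecognized
  esac
}

uv_equiv "pip install requests"     # -> uv add requests
uv_equiv "pip uninstall requests"   # -> uv remove requests
```

The order of the cases matters: the more specific `-r` pattern has to come before the generic `pip install` one.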

Pro Tip:

Are you lost on a particular command-line step? Use llm, your command-line copilot. You can keep the conversation going right in your terminal without jumping out to ChatGPT.

Here are the steps:

  1. First, install and configure llm (see above)
  2. Run a command that fails or is not working as expected
  3. Highlight and copy the output
  4. Use pbpaste to pass it into llm: pbpaste | llm "What's going wrong here?"
  5. And you can keep the conversation going by passing the -c flag to llm, e.g.: pbpaste | llm -c "I tried your suggestion, but it didn't work. What should I do?"
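The copy-and-ask loop above can be wrapped in a tiny helper (a sketch; ask is my own name, and using it assumes macOS, where pbpaste reads the clipboard, with llm installed as above):

```shell
# Sketch: pipe whatever is on the clipboard into llm with a question.
ask() {
  pbpaste | llm "$@"
}

# After copying a failing command's output:
#   ask "What's going wrong here?"
#   ask -c "I tried your suggestion, but it didn't work. What should I do?"
```

Because it forwards all arguments with "$@", flags like -c pass straight through to llm.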