Working with LLM APIs and Prompts

Deliverables due Wed Jan 29 4:45pm

You need to demonstrate that you have all tutorial code examples working using LiteLLM and OpenRouter. You also need to show that you can use the LLM CLI and Aider with your OpenRouter.ai API key. Submit your working code to your Lab01 GitHub repo. I will provide a GitHub repo creation link on Campuswire.

Requirements

For this lab you are going to work through the Real Python tutorial on Practical Prompt Engineering:

https://realpython.com/practical-prompt-engineering/

However, instead of using the OpenAI API and OpenAI Python library, we are going to adapt the code to use the LiteLLM API and configure it to use OpenRouter.ai. Before installing any Python libraries on your computer, see the setup steps below, including the note on creating a Python virtual environment.

Development APIs, Libraries, and Tools

To assist in developing your code in this class, we will get familiar with different LLM tools and coding assistants. For this lab you should get the following tools and libraries installed on your local machine: GitHub access, a Python virtual environment, the LLM CLI, Aider, and LiteLLM.

Secure GitHub Access

You will be developing and submitting your code and projects using GitHub. You will need to establish secure shell access to GitHub from your computer. Likely you have done this for other courses. You have two choices: ssh GitHub access or GitHub Personal Access Tokens.

Prompt: “How do I configure ssh access to GitHub for terminal access on macOS?”

Prompt: “How do I configure ssh access to GitHub for terminal access on Windows WSL Ubuntu?”

Prompt: “How do I configure GitHub with Personal Access Tokens to use from the shell?”
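If you go the ssh route, the local steps look roughly like this on macOS or WSL Ubuntu (the key filename and email comment below are placeholders; adapt them to your setup):

```shell
# Create ~/.ssh if it does not exist yet
mkdir -p ~/.ssh && chmod 700 ~/.ssh

# Generate an Ed25519 key pair; the filename and comment are just examples
ssh-keygen -t ed25519 -C "you@example.edu" -f ~/.ssh/id_ed25519_github -N ""

# Print the public key, then paste it into GitHub > Settings > SSH and GPG keys
cat ~/.ssh/id_ed25519_github.pub

# Once the key is registered on GitHub, verify the connection:
#   ssh -T git@github.com
```

If you protect the key with a passphrase instead of `-N ""`, add it to `ssh-agent` so you are not prompted on every push.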

Python Virtual Environment

I suggest that you create a Python virtual environment to contain and manage all the Python libraries we will be using. I create a global location for all my virtual environments instead of putting them into specific projects.

Prompt: “How do I create and use a Python virtual environment in a shared location such as ~/.venvs?”
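As a sketch of that shared-location approach (the environment name llm-lab is arbitrary):

```shell
# Create a shared home for virtual environments and one env for this course
mkdir -p ~/.venvs
python3 -m venv ~/.venvs/llm-lab

# Activate it in each new shell session
source ~/.venvs/llm-lab/bin/activate

# With the env active, pip installs land in ~/.venvs/llm-lab, e.g.:
#   pip install litellm llm aider-chat
```

Your shell prompt usually changes to show the active environment; run `deactivate` to leave it.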

OpenRouter.ai Account and API Key

We will use OpenRouter.ai to get access to a wide range of LLMs, from frontier models like OpenAI GPT-4o and Anthropic Claude 3.5 Sonnet to emerging open weight models like DeepSeek V3. Create an account and add some credits. You can start with a low amount like $5.00 to $10.00; I don’t expect you will need more than $50.00 total for the semester. We will use OpenRouter LLMs both for developing LLM applications and for using open source coding assistants.

https://openrouter.ai/
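Once you have a key, LiteLLM and Aider can pick it up from the `OPENROUTER_API_KEY` environment variable (the LLM CLI stores its key separately with `llm keys set openrouter`). A sketch for your shell startup file, with a placeholder in place of a real key:

```shell
# Put this in ~/.bashrc or ~/.zshrc; replace the placeholder with your key
export OPENROUTER_API_KEY="sk-or-REPLACE-ME"
```

Keep the key out of your code and out of your Git repo; an environment variable or an untracked `.env` file avoids accidentally committing it.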

LLM CLI (by Simon Willison)

https://github.com/simonw/llm

You will need to install the OpenRouter.ai plugin:

https://github.com/simonw/llm-openrouter

Aider for AI Pair Programming

https://aider.chat/

I use Aider from the command line, but there are also VS Code extensions for Aider. I’m not sure which one is the best, so I’d love feedback if you try them. Here is one that appears to be actively worked on:

https://github.com/lee88688/aider-composer

Aider also has a “--watch-files” mode that does not require an extension and can work with any IDE:

https://aider.chat/docs/install/optional.html#add-aider-to-your-ideeditor

To use an OpenRouter model by default with Aider, you can set your key as described here:

https://aider.chat/docs/llms/openrouter.html

Also, you can set a global model:

$ cat ~/.aider.conf.yml
model: openrouter/anthropic/claude-3.5-sonnet

LiteLLM

Install LiteLLM in your Python virtual environment:

https://github.com/BerriAI/litellm

Here is a simple LiteLLM test program:

import os
from litellm import completion

# LiteLLM reads the OpenRouter key from the environment
assert "OPENROUTER_API_KEY" in os.environ, "Set OPENROUTER_API_KEY first"

response = completion(
    model="openrouter/anthropic/claude-3.5-sonnet",
    messages=[{"role": "user", "content": "What is 1 + 2?"}],
)

print(response.choices[0].message.content)

Optional Extensions

If you would like to go beyond the material in the tutorial, here are some suggestions:

  1. Compare the results using different LLMs. Perhaps try a frontier model (e.g., OpenAI, Anthropic, Google) and an open weights model (e.g., Meta Llama 3.3 or DeepSeek V3). Provide a qualitative analysis of the differences.

  2. Come up with a quantitative metric for determining model quality for some of the tutorial tasks and summarize the results for two or more different models.

  3. Apply the tutorial prompt techniques to a different data set of your choosing. It could be data you find on the Internet, or something you create synthetically, possibly using LLMs.
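For extension 2, even a crude metric is a reasonable starting point. Here is a hedged Python sketch of an exact-match accuracy score; the questions are illustrative, and the canned-answer lookup stands in for a real LiteLLM completion call:

```python
def exact_match_accuracy(examples, ask_model):
    """Fraction of (question, expected_answer) pairs where the model's
    reply matches exactly, ignoring case and surrounding whitespace."""
    correct = sum(
        ask_model(q).strip().lower() == expected.strip().lower()
        for q, expected in examples
    )
    return correct / len(examples)


if __name__ == "__main__":
    # Stand-in for a real model call such as litellm.completion(...)
    canned = {"What is 1 + 2?": "3", "Capital of France?": " paris "}
    examples = [("What is 1 + 2?", "3"), ("Capital of France?", "Paris")]
    print(exact_match_accuracy(examples, canned.get))
```

To compare two models, run the same `examples` list through each model's `ask_model` function and report both scores.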

Notes on Repository Contents

Please keep your repositories organized and free from any extra files. Include a README.md file that describes the purpose of the lab or project and explains how to build and run your code. Provide some examples in the README.