AutoCog — Generate Cog configuration with GPT-4

Posted April 19, 2023

(Screen recording: https://github.com/andreasjansson/AutoCog/raw/main/assets/screen-recording.gif)

Cog lets you create a Docker image from a machine learning repository with very little code. But wouldn’t it be better if you didn’t have to write any code? Enter AutoCog!

Inspired by tools like Auto-GPT and BabyAGI, AutoCog uses GPT-4 to not only write code, but to run and fix the code. The algorithm is roughly:

  1. Give AutoCog a machine learning repository
  2. Order the files in the repository based on how important they are to Cog
  3. Pass as many of them as the GPT-4 context window allows into GPT-4
  4. Tell GPT-4 to create a cog.yaml and predict.py file based on the files in the repository
  5. Create a cog predict shell command to run a prediction based on the generated files
  6. Run the cog predict command
  7. If it fails, diagnose the error and try to fix either cog.yaml, predict.py, or the cog predict command. Repeat from the previous step up to five times.
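The loop in steps 5–7 can be sketched in Python. This is a minimal illustration, not AutoCog's actual code: `generate`, `run_predict`, and `fix` are hypothetical callables standing in for AutoCog's real GPT-4 calls and its `cog predict` invocation.

```python
def autocog_loop(generate, run_predict, fix, max_attempts=5):
    """Generate cog.yaml, predict.py, and a predict command, then
    run-and-fix until a prediction succeeds or attempts run out.

    generate()                 -> (cog_yaml, predict_py, predict_cmd)
    run_predict(cmd)           -> (ok, error_output)
    fix(error, yaml, py, cmd)  -> (cog_yaml, predict_py, predict_cmd)
    """
    cog_yaml, predict_py, predict_cmd = generate()  # steps 2-5

    for _ in range(max_attempts):
        ok, error = run_predict(predict_cmd)        # step 6
        if ok:
            return cog_yaml, predict_py, predict_cmd
        # Step 7: let the model diagnose the failure and patch
        # whichever of the three artifacts it thinks is at fault
        cog_yaml, predict_py, predict_cmd = fix(
            error, cog_yaml, predict_py, predict_cmd
        )
    raise RuntimeError(f"no working configuration after {max_attempts} attempts")
```

With the GPT-4 calls factored out as parameters, the retry logic itself is easy to exercise in isolation.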

Human in the loop

AutoCog is pretty magical when it works. But a lot of the time it doesn’t. Sometimes it doesn’t know the exact Python package versions, and sometimes it just goes down a bad path that makes things worse at every attempt.

In those cases, you might just want to hit Ctrl-C and fix it yourself. Luckily your fix doesn’t have to be perfect either, since AutoCog has a --continue flag that picks up where you left off. Most of the time, a gentle nudge from a human is all that’s needed to help AutoCog reach a working solution.

Programming the programmer

AutoCog itself was written by a human, me. Writing a tool like this is like being a micromanager for a technically excellent programmer with poor judgement. After a while you develop empathy for GPT-4, and you break up the task into smaller subtasks that GPT-4 has a chance of achieving.

Each subtask consists of a prompt and some code to parse the output. For example, the prompt to order the Python files in the directory is:

Given the file paths and readme below, order them by how relevant they are for inference and in particular for building a prediction model for Replicate with Cog. Return the ordered file paths in the following format (and make sure to not include anything else than the list of file paths):
 
most_relevant.py
second_most_relevant.py
third_most_relevant.py
[...]
least_relevant.py
 
Here are the paths:
 
{paths_list}
 
End of paths. Below is the readme:
 
{readme_contents}
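A sketch of how a template like the one above might be filled in and the model's reply parsed. The template here is abbreviated, and the parsing strategy (keep only lines that match known paths, in the order returned) is an assumption: it guards against the model adding commentary despite the instructions, but AutoCog's actual parser may differ.

```python
# Abbreviated version of the ordering prompt from the post
PROMPT_TEMPLATE = """\
Given the file paths and readme below, order them by how relevant \
they are for inference [...]

Here are the paths:

{paths_list}

End of paths. Below is the readme:

{readme_contents}"""


def build_prompt(paths, readme):
    """Fill the template with newline-separated paths and the readme."""
    return PROMPT_TEMPLATE.format(
        paths_list="\n".join(paths), readme_contents=readme
    )


def parse_ordering(response, known_paths):
    """Keep only lines that are actual repository paths, preserving
    the order the model returned them in."""
    known = set(known_paths)
    return [line.strip() for line in response.splitlines()
            if line.strip() in known]
```

Filtering against the known path list means a stray "Sure, here you go:" preamble in the response is simply ignored rather than crashing the pipeline.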

The reason we order the paths before we send them to GPT-4 is the length of the context window. A repository often has more Python code than the 8,192 tokens GPT-4 accepts, so AutoCog truncates the input files before passing them to GPT-4.

The limited context window is one of the main hurdles when writing a tool like AutoCog. The prompts need to be constructed in a way that includes as much information as possible, without going over the limit.
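Budget-based packing can be sketched as follows. This is an illustration, not AutoCog's implementation: files arrive already ordered by relevance, we pack as many as fit under a token budget, and cut the last file if needed. The four-characters-per-token estimate is a rough heuristic standing in for a real tokenizer such as tiktoken.

```python
def estimate_tokens(text):
    # Crude approximation: ~4 characters per token for English/code.
    # A real implementation would use a proper tokenizer.
    return len(text) // 4


def pack_files(ordered_files, budget_tokens):
    """ordered_files: list of (path, contents), most relevant first.
    Returns the subset (possibly with the last file truncated) that
    fits within budget_tokens."""
    packed, used = [], 0
    for path, contents in ordered_files:
        cost = estimate_tokens(contents)
        if used + cost <= budget_tokens:
            packed.append((path, contents))
            used += cost
        else:
            # Truncate the next file to the remaining budget, then stop
            remaining_chars = (budget_tokens - used) * 4
            if remaining_chars > 0:
                packed.append((path, contents[:remaining_chars]))
            break
    return packed
```

Ordering first means the truncation falls on the files least likely to matter for writing cog.yaml and predict.py.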

Try it yourself

You can run AutoCog on your own project by installing it from PyPI. For more documentation on how to use it, see the GitHub README at andreasjansson/AutoCog.