Using LLMs to Enhance My Capabilities

Dated Dec 24, 2024; last modified on Wed, 25 Dec 2024

LLMs appear to be here to stay, despite the reservations. How can I use them to enhance my capabilities?

Building complete applications, e.g., a trivia-like game with Python's Flask web server. This makes prototyping cheap in cases where the technology behind the prototype matters much less than the content or problem being solved.
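
A minimal sketch of the kind of Flask prototype an LLM might scaffold; the route and the questions below are made up for illustration:

```python
import random

from flask import Flask, jsonify

app = Flask(__name__)

# A couple of hard-coded questions; a real prototype would load these from a file.
QUESTIONS = [
    {"question": "What year was Python 3.0 released?", "answer": "2008"},
    {"question": "Who created the C programming language?", "answer": "Dennis Ritchie"},
]

@app.route("/question")
def question():
    # Return a random trivia question as JSON.
    return jsonify(random.choice(QUESTIONS))

if __name__ == "__main__":
    app.run(debug=True)
```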

As a tutor for new technologies/frameworks. Although React might be new to you, it's not new to other people, and the answer probably exists in some tutorial somewhere, e.g., blinking an LED on a Raspberry Pi.
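
For the LED example, the answer really is tutorial material; a minimal sketch, assuming the RPi.GPIO library and an LED wired to BCM pin 17 (both assumptions here):

```python
import time

import RPi.GPIO as GPIO

LED_PIN = 17  # assumption: LED wired to BCM pin 17

GPIO.setmode(GPIO.BCM)
GPIO.setup(LED_PIN, GPIO.OUT)

try:
    while True:
        GPIO.output(LED_PIN, GPIO.HIGH)  # LED on
        time.sleep(0.5)
        GPIO.output(LED_PIN, GPIO.LOW)   # LED off
        time.sleep(0.5)
finally:
    GPIO.cleanup()  # release the pin on exit
```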

For monotonous tasks, e.g., converting unstructured data into a structured format, citing web pages, etc.

Make every user a “power user”, e.g., hooking an LLM into Emacs to automate transforming code snippets: C-h C-h rewrite these #defines to a dictionary of {keycode: string, …}.
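
Roughly the transformation being asked for, with invented key names: a few C #defines rewritten as a Python dictionary of {keycode: string}:

```python
# Input snippet (invented key names):
#
#   #define KEY_ESC   1
#   #define KEY_ENTER 28
#   #define KEY_SPACE 57
#
# Output the LLM is being asked for:
KEYCODES = {
    1: "KEY_ESC",
    28: "KEY_ENTER",
    57: "KEY_SPACE",
}
```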

As an API reference. Write a LaTeX command \red{} that turns the text red. What's the lldb equivalent of “i r”? What does this do: find . -name '*txt' -exec bash -c ' mv $0 ${0/\ZZ_/}' {} \;
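
For the last prompt, a rough Python equivalent of what that find one-liner appears to do, assuming the intent is to drop the first "ZZ_" from the name of every *txt file under the current directory:

```python
from pathlib import Path

# Walk the current directory for files ending in "txt" and remove the first
# "ZZ_" from each filename, mirroring the bash parameter expansion ${0/ZZ_/}.
for path in Path(".").rglob("*txt"):
    new_name = path.name.replace("ZZ_", "", 1)
    if new_name != path.name:
        path.rename(path.with_name(new_name))
```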

To search for things that are hard to find. With traditional search engines, you end up guessing keywords that the answer will contain rather than stating the question. An LLM takes the question directly: I know that + corresponds to __add__, but what is ~?
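
The answer in this case: just as + dispatches to __add__, unary ~ dispatches to __invert__. A toy class to confirm it:

```python
class Mask:
    """Toy wrapper around an int to show which dunders the operators call."""

    def __init__(self, bits):
        self.bits = bits

    def __add__(self, other):      # called for: Mask(...) + Mask(...)
        return Mask(self.bits | other.bits)

    def __invert__(self):          # called for: ~Mask(...)
        return Mask(~self.bits & 0xFF)  # flip within one byte for the example

print((~Mask(0b00001111)).bits)  # 240, i.e. 0b11110000
```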

To solve one-off tasks. These are disposable programs where cleanliness doesn't matter. I have a file with a JSON object on each line (like jsonl). Each line has a key called “text”, along with some other keys I don't care about. Can you write a function in Python to extract the string on each line? In bash, every second, write the memory usage and average CPU usage to a file.
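
For the first of those one-offs, a sketch of the throwaway Python (data.jsonl is a placeholder filename):

```python
import json

def extract_texts(path):
    """Return the "text" field from each JSON object in a JSONL file."""
    texts = []
    with open(path) as f:
        for line in f:
            record = json.loads(line)      # one JSON object per line
            texts.append(record["text"])   # ignore every other key
    return texts

print(extract_texts("data.jsonl"))
```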

To explain things. While LLMs are not as knowledgeable as the best person in the world, there are thousands/millions of people who know the answers to the questions that I have, and so the LLMs probably have the answer too, e.g., what does 60mW/sr @ 100mA mean? LLMs are useful for grounding you in the rough shape and jargon of a field; use them as a starting point for building an accurate mental model of how the field works; treat LLMs as just one unreliable source of information. #knowing

Advice in On Learning applies here.

To solve tasks with known solutions. Can you help me convert this Python program into a C program that does the same thing but is faster? Rewrite this function in Rust. Use mp.Pool to parallelize this with 128 jobs across the size of the file.
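
A sketch of what the mp.Pool prompt is asking for, with process_line and input.txt standing in for the real work and file:

```python
import multiprocessing as mp

def process_line(line):
    # Stand-in for the real per-line work.
    return len(line)

if __name__ == "__main__":
    with open("input.txt") as f:
        lines = f.readlines()
    # 128 worker processes, with the file's lines spread across them.
    with mp.Pool(processes=128) as pool:
        results = pool.map(process_line, lines)
    print(sum(results))
```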

Skill as a code reviewer comes in handy. Can you tell when the LLM is hallucinating or inaccurate? If I don’t know Rust, how can I be confident that the program is correct?

To fix common errors. Ask the LLM “How do I fix this error? [error]”; apply the step-by-step solution it suggests; if that does not work, say “that didn't work”.

LLMs have increasingly large context windows. For small repositories, you can dump the source into a text file, copy it into the context, and ask questions like “write some unit tests for foo”, “analyze the code and point out things I've overlooked”, etc.
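
A sketch of the “dump the source into a text file” step for a small Python repository (the extension filter and output name are assumptions):

```python
from pathlib import Path

# Concatenate every .py file under the current directory into one text file
# that can be pasted into the model's context window.
with open("repo_dump.txt", "w") as out:
    for path in sorted(Path(".").rglob("*.py")):
        out.write(f"\n===== {path} =====\n")
        out.write(path.read_text())
```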

  1. How I Use 'AI'. Nicholas Carlini. nicholas.carlini.com. Aug 1, 2024. Accessed Dec 24, 2024.
  2. How I Use 'AI' | Hacker News. news.ycombinator.com. Accessed Dec 24, 2024.