Vibe coding is the fun new LLM game sweeping the nation. Rather than understanding your code, you run purely off of a prompt and vibes. Get an error? Paste it into the LLM and say “plz fix”. Very fun.
Vibe coding has been most helpful for me in CSS and JavaScript land. I don’t really understand them, don’t particularly want to, and find it much easier to say “Make this site look professional using Tailwind” and let the LLM figure it out.
Separately, modeling is also quite fun, and I’ve honed my prompts down to a few key commands:
- Break things apart, like a Jupyter Notebook
- Watch out for the failure mode where all the predictions cluster toward the center of the distribution
- Use library X and Y (those that work on my system)
- Write tests!
Super fun. The one thing I haven’t gotten to work is JavaScript game design - for some reason, the LLM doesn’t get all the parts of a game right as easily as it does other projects. Here’s hoping! I’ve separately logged some tips in modeling-llm for vibe data science / vibe modeling, which is also pretty fun.
Two other notes here. The first is from Eric Zakariasson, a current Cursor dev, who wrote up his top rules for working with Cursor:
- setup proper cursor rules in .cursor/rules with specific domain knowledge (see the example after this list)
- break down tasks into small incremental steps instead of tackling everything at once
- use git for version control as a safety net
- create documentation files (prd.md, specs.md) as reference context for the ai
- use @ references to provide specific context from files
- start new chats for each task to avoid context bloat
- use reasoning models (e.g. 3.7 max mode) for planning, regular models for implementation
- add detailed comments about your project goals
- plan with “ask” mode, then implement with “agent” mode
- be specific with prompts. clear instructions get better results
- maintain todo.md files to track progress
- use MCP for advanced control
- adopt test driven development
- structure code with solid principles
- tag all necessary files when providing context
- understand the limitations of ai coding assistance
- avoid over-reliance on the tool for critical tasks
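To make the first rule concrete, here’s a hypothetical `.cursor/rules` entry. Cursor reads rule files (`.mdc`) from that directory; the exact frontmatter keys can vary by Cursor version, and the rule text below is my own illustration, not Eric’s:

```
---
description: Conventions for the API layer
globs: ["src/api/**/*.ts"]
alwaysApply: false
---

- All endpoints return JSON with a top-level `data` or `error` key.
- Validate every request body with the schemas in src/api/schemas.
- Never call the database directly from a route handler; go through src/db/repos.
```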
The second is from Swyx. I don’t know what this means, but I’m sure it’s smart!
Unbundle run() into:
- init()
- continue(id)
- cleanup(id)
never assume you will call these in order
Always checkpoint and resume from ids.
Pass nothing else.
This forces you to keep things serializable and therefore loggable, reproducible, parallelizable.
by the way try not to name them ‘id’ if you can add extra detail like ‘runId’, ‘taskId’, ‘subTask2Id’.
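Here’s a minimal sketch of what I take the unbundled shape to be, in TypeScript. All the names (`Checkpoint`, the JSON-on-disk storage, and `continueRun` - since `continue` is a reserved word in JavaScript) are my assumptions, not Swyx’s code; the point is that only ids cross the function boundaries, so all state has to stay serializable.

```typescript
import * as fs from "fs";
import { randomUUID } from "crypto";

// State must stay JSON-serializable: that's what makes runs
// loggable, reproducible, and parallelizable.
interface Checkpoint {
  runId: string; // descriptive name, per the tip: not just "id"
  step: number;
  state: Record<string, unknown>;
}

const dir = "checkpoints";

function save(cp: Checkpoint): void {
  fs.mkdirSync(dir, { recursive: true });
  fs.writeFileSync(`${dir}/${cp.runId}.json`, JSON.stringify(cp));
}

function load(runId: string): Checkpoint {
  return JSON.parse(fs.readFileSync(`${dir}/${runId}.json`, "utf8"));
}

// init() creates a fresh checkpoint and returns only its id.
function init(): string {
  const cp: Checkpoint = { runId: randomUUID(), step: 0, state: {} };
  save(cp);
  return cp.runId;
}

// continueRun() resumes from the id alone -- nothing else is passed,
// and it never assumes init() was called in this same process.
function continueRun(runId: string): void {
  const cp = load(runId);
  // ... do one unit of work against cp.state here ...
  cp.step += 1;
  save(cp);
}

function cleanup(runId: string): void {
  fs.unlinkSync(`${dir}/${runId}.json`);
}
```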
any time you cross a system boundary*:
RATE LIMITS, TIMEOUTS, RETRIES, LOG TRACE.
NO EXCEPTIONS.
*e.g. call a microservice or external api. yes, this includes llm APIs, meaning every ai project needs this
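As a sketch of what that looks like at a single boundary (my own illustration, assuming a fetch-based call and exponential backoff):

```typescript
// One illustrative boundary call with all four rules applied:
// rate limits, timeouts, retries, log trace.
async function callBoundary(url: string, body: unknown, maxRetries = 3): Promise<Response> {
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      const res = await fetch(url, {
        method: "POST",
        headers: { "content-type": "application/json" },
        body: JSON.stringify(body),
        signal: AbortSignal.timeout(30_000), // TIMEOUT
      });
      console.log(`[trace] POST ${url} attempt=${attempt} status=${res.status}`); // LOG TRACE
      if (res.status !== 429) return res;
      // RATE LIMIT: fall through to the backoff below and retry
    } catch (err) {
      console.log(`[trace] POST ${url} attempt=${attempt} error=${err}`); // LOG TRACE
      if (attempt === maxRetries) throw err;
    }
    await new Promise(r => setTimeout(r, 2 ** attempt * 1_000)); // RETRIES, with backoff
  }
  throw new Error(`gave up on ${url} after ${maxRetries} attempts`);
}
```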
Update: a fun post on how hard it is to turn something into an actual product. Vibe coding is great for personal programs on one’s own machine, but for an actual product you still have to go through deployment, billing, etc.
One of the genuinely positive things about tools like Copilot and ChatGPT is that they empower people with minimal development experience to create their own programs. Little programs that do useful things - and that’s awesome. More power to the users.
But that’s not product development, it’s programming. They aren’t the same thing. Not even close.
I think I agree - getting more people to code is very cool, but it’s another level to turn that into a professional product.
A fun post from Grant Slatton on simple tweaks that would make these coding tools more effective.
the first trivial improvement is to make it so the system prompt (i.e. CLAUDE.md) is not just inserted at the top of the session, but somewhat frequently injected/repeated, because in long sessions it tends to be forgotten
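A sketch of the idea, assuming a standard chat-message array (the message shape and the `every` interval are my assumptions, not Grant’s code):

```typescript
type Msg = { role: "system" | "user" | "assistant"; content: string };

// Rebuild the message list before each request so the system prompt
// (e.g. the contents of CLAUDE.md) reappears every `every` turns,
// instead of drifting out of effective context in long sessions.
function withRepeatedSystemPrompt(history: Msg[], systemPrompt: string, every = 10): Msg[] {
  const out: Msg[] = [{ role: "system", content: systemPrompt }];
  history.forEach((msg, i) => {
    out.push(msg);
    if ((i + 1) % every === 0) {
      out.push({ role: "system", content: systemPrompt }); // periodic re-injection
    }
  });
  return out;
}
```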
the next trivial improvement is if the filetree for the project is pretty small (and it almost always is for most vibe-coding projects, less than 100 files), the whole file tree should just always be in the context window
not only that, but each file should be tagged with a few sentences describing “what is this file about” that gets updated when the file is edited
so when i give a command like “read the docs in the image editor client and…” it shouldn’t be like, doing “grep -r editor” to find the file, it should just open client/images/editor.js or whatever because it sees the whole filetree, and also knows what that file is about
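In data-structure terms, that’s something like the following (my sketch, hypothetical names):

```typescript
// Every file in the (small) project tree, each tagged with a short
// description that gets refreshed whenever the file is edited.
interface FileEntry {
  path: string;    // e.g. "client/images/editor.js"
  summary: string; // "what is this file about", a few sentences
}

// Rendered into the context window on every single request.
function renderFileIndex(index: FileEntry[]): string {
  return index.map(f => `${f.path} -- ${f.summary}`).join("\n");
}
```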
you can extend this file index to include a sub-file index, i.e. chunk up the file recursively and for each chunk, write a paragraph about “what is this chunk about (and how does it fit into what the parent chunk/file is about)”, and keep that thing updated as you edit
so you have this whole index of all the code, what does what, where it is, etc
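Extending the same sketch one level down, the sub-file index might look like a recursive chunk tree (again, my own hypothetical shape):

```typescript
interface ChunkNode {
  range: [startLine: number, endLine: number];
  // "what is this chunk about, and how does it fit into what the
  // parent chunk/file is about" -- rewritten when the chunk is edited
  summary: string;
  children: ChunkNode[]; // recursively smaller chunks
}

interface IndexedFile {
  path: string;
  summary: string;
  chunks: ChunkNode[]; // re-summarized whenever the file changes
}
```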
the next trivial improvement is to give it access to the LSP (language server protocol) of the language, so it can do things like “find definition” or “find all references” etc, this is a billion times better than just grepping
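Those are standard LSP methods; the raw JSON-RPC requests look roughly like this (transport wiring omitted, file path hypothetical):

```typescript
// "find definition" for the symbol under a given cursor position
const findDefinition = {
  jsonrpc: "2.0",
  id: 1,
  method: "textDocument/definition",
  params: {
    textDocument: { uri: "file:///project/client/images/editor.js" },
    position: { line: 42, character: 10 },
  },
};

// "find all references" is the same shape with a different method
const findReferences = {
  jsonrpc: "2.0",
  id: 2,
  method: "textDocument/references",
  params: {
    textDocument: { uri: "file:///project/client/images/editor.js" },
    position: { line: 42, character: 10 },
    context: { includeDeclaration: true },
  },
};
```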
the next trivial improvement is to allow for more controllable flowcharts for “how to do good software dev”, and provide some decent defaults, just more structure since the models aren’t smart enough to not go off the rails yet