923

July 28th, 2025

Getting the Most Out of AI Coding


Transcript

Wes Bos

Welcome to Syntax. We got a quick episode for you on how to get the most out of AI coding. So these are a bunch of tips that I've come up with myself, Scott added a whole bunch of his, and I also reached out on Twitter and asked everybody, what are your best tips for getting the most out of AI coding? Especially, this is not just, like, prompt engineering, but, basically, how do you approach problems? How do you ask for things that you want? How do you supply the information so that you get the best possible code out the other end? My name is Wes Bos. With me, as always, mister Scott Tolinski. How are you doing, Scott?

Scott Tolinski

Hey. I'm doing good, man. I'm wearing a sweatshirt in the summer because I don't know about you.

Scott Tolinski

My air conditioning only cools, like, the bottom floor of our house. And then, like, the top floor, super warm. I think that's common, but, like Yeah. Yeah, man. We'll have the air conditioning on, and every morning, we come downstairs. And our our first floor is, like, freezing because the thermostat there's a thermostat upstairs.

Wes Bos

So I gotta figure that out. I gotta get something going here because this is like, it's you know, the upstairs, you're sweating your your butt off, and downstairs, you're an ice cube. Drives me nuts that HVAC is not better. Like, the amount of people that I talk to that have brand new houses, and they're like, yeah, it's kinda hot in this one room. It's like, can someone please figure that out? And the move, honestly, is these these little heat pumps. I've got one right here, the mini splits. Yeah. Where, like, every room should just be able to you can turn it on. Like, why are we cooling an entire house just to keep one thing cool? And, like, I'm in a I'm in a basement usually, and I've I've got foam stuffed in the vents because it's, like, so cold in our basement, and then the top floor is just cooking.

Scott Tolinski

Yeah. I think that makes so much sense because, yeah, you'd just be cooling a single room. I have a... since my office is detached, my office is attached to the garage.

Scott Tolinski

I have a mini split inside of here, and it cools this room so quickly. I don't I mean, honestly, I don't I don't know deep into the energy side of things what's most efficient or whatever. But, like, it is it is very fast how fast it cools one specific room. I I would love that. Mine is busted right now. So I have this, like, window shaker,

Wes Bos

and it's busted. The fixed one comes.

Wes Bos

And, like, it's it's 25.9 in here right now. I had it down... no, the rest of the world uses Celsius. Hold on. Okay. We have to convert it. 25.9 Celsius to Fahrenheit is 78.62.

Scott Tolinski

So I'm gonna be dead by the end of this episode, so let's get going before it gets too hot in here. Let's get going. Yes. And since we're talking about AI, your computer's gonna get going too. No. Just kidding.

Scott Tolinski

Let's get into it. So how do you get good results from AI when you're coding with AI? Not, like, building AI into your apps, but using AI to code.

Scott Tolinski

How do you get good results there? So, Wes, you wanna kick it off with your scaffold tip?

Wes Bos

Yeah. So I think a lot of people think, like, these AI apps, you're just gonna, like, one shot type in what you want. It's gonna poop out an app the other end. It's gonna be absolutely perfect. And Make an app. No bugs. Thanks.

Wes Bos

Yeah. Part of part of me has been, like like, recently, I've been, like, starting apps, being, like, build an app that does x, y, and z. And I found myself having to create, like, a starter prompt being like, don't use Axios.

Wes Bos

Don't use Express.

Wes Bos

Don't use Tailwind three, use Tailwind four instead. Or don't use Tailwind at all. There's all these, like, things that I've had to say, and then I find myself spending, like, the next twenty minutes being like, stop using this, use this instead of this. I'm like, I forgot about that. And quite honestly, it's not worth your time to try to scaffold out the application with all of the choices that you want with AI. So what you should do is scaffold it out yourself.

Wes Bos

At least install all of the dependencies that you want.

Wes Bos

You know? Go ahead. Because almost always what it does for me is it just does, like, a Vite TypeScript template, and then it goes in and npm installs everything that it wants, and then it changes everything that it wants. I'm like, I coulda done this in, like, six minutes instead of just sitting here waiting for it to do all of its work. So my thing there is scaffold out that very first five or ten percent yourself, and then you have a really good base for what it is that you want. Even go as far as to maybe do one little example of the coding style that you want and then pick it up from there.

Scott Tolinski

Yes. Totally agree. The CLIs for installing these things are very good. They're so good. Like, the Svelte sv CLI for getting a project going, very good. We all know how to install npm things. We all know what we want.

Scott Tolinski

Why would we be leaving this up to the AI to kind of, like, guess and then have to do text prompts for it and stuff like that? It is just about always faster for me to do npm install, get all my dependencies, get my project up and running, and then start getting into it. Some people might disagree with this, but I think, personally, it saves me quite a bit of frustration getting going. Another one is to be clear with your prompts, because we might think sometimes that the words that we're saying can be received well by the LLMs and understood.

Scott Tolinski

But if you are very clear, like, do not use blank. Use blank as opposed to use blank, not blank.

Scott Tolinski

It sounds like a subtle difference, but I do think that, like, being very clear with these prompts is the way to go. For me, it's always a battle with Svelte.

Scott Tolinski

Use Svelte five syntax always.

Scott Tolinski

Like, that is, like, one of the the big ones for me that I I always have to tell it or have inside of my rules or any of these things like that. So you need to be clear with what you're asking it for because at the end of the day, it's it's gonna choose to do what it wants to do if you don't give it enough context.

Wes Bos

Yeah. It's it's funny because, as humans, you can say something like use Tailwind four, not three. Yes. And, obviously, to a human, you think, okay, of course.

Wes Bos

Don't use Tailwind three. I can infer that. But then you have to leave all of that up to the AI to infer instead of being explicit with what you want. And this tip will probably fade away with time as they get better, but I found that being as explicitly clear as you can, and not trying to be cute or whatever, just kinda short with your thing, is much more important. So be extremely clear with which versions you want, what not to do, what to use instead. So, for example, I would say in one of my beginner prompts, use CSS Grid for layout when possible instead of using Flexbox.

Wes Bos

And not something like, use Grid instead of Flexbox for layout.

Wes Bos

Yes.

Wes Bos

The more clear is better.

Scott Tolinski

More clear is better. Yes. More better. Clear is more better.
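As a rough sketch of what that kind of explicit, do-this-not-that prompt can look like in practice, here is a hypothetical starter prompt kept as a string. The specific libraries and versions are just examples, not the hosts' actual prompt.

```ts
// A made-up starter prompt illustrating the "be explicit" tip: name exact
// versions and spell out what NOT to use instead of leaving it to inference.
const starterPrompt = `
Use Tailwind 4. Do not use Tailwind 3.
Use Svelte 5 runes syntax. Do not use Svelte 4 stores syntax in components.
Use CSS Grid for layout when possible instead of Flexbox.
Do not install axios; use the built-in fetch API.
`;

console.log(starterPrompt);
```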

Wes Bos

One tip I've gotten from a lot of, like, cursor rules or llms.txt, things like this, is that using XML tags around your specific items is a very good way to tell the AI where you are starting and stopping. And what I mean by XML tags is you're just making up tags. So you might say, like, coding dash style, open tag, and then write all of your preferences in there and then close it. And that's a really neat way to specify to the LLM that this is a contained piece of data. Or example code, open it up and then close it up. And anytime that you want to clearly delineate a start and stop of something, just wrap it in a made up tag, and that will really go further to give context to the LLM.
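To make the made-up-tags idea concrete, here is a small hypothetical sketch of a prompt assembled with invented XML-style tags. None of these tag names are standard; that is the point, they just mark boundaries.

```ts
// Invented tags like <coding-style> and <example-code> mark where each chunk
// of context starts and stops so the model can tell them apart.
const prompt = [
  "<coding-style>",
  "Use TypeScript with ES modules. Prefer CSS Grid over Flexbox for layout.",
  "</coding-style>",
  "",
  "<example-code>",
  "export function greet(name: string) { return 'Hello ' + name; }",
  "</example-code>",
  "",
  "Refactor the attached component to match the coding style above.",
].join("\n");

console.log(prompt);
```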

Scott Tolinski

Yeah. I have not ever done that, personally. I did not know that was a thing. And in that same regard, let's keep going on this. Utilize these files. A lot of these editors, you know, the IDEs that have these things, whether that is Copilot rules or Cursor rules, use these features because, nowadays, they're getting better and better, where you can even have folders of different rules and organized rules and multiple files, or using markdown for these things for better organization.

Scott Tolinski

Utilize the rules files because they will help you not have to continually stress rules inside of your chats, inside of your AI sessions here. Where, like, if I have to continually say, please use this syntax, please do this, please do that... you may still have to, at some point, continue to reference rules and code styles and things that you want. But the editors have these features for a reason. So use these files to specify how you want your application to be, where you want things, what coding style, what versions.

Scott Tolinski

All kinds of stuff can go inside of cursor rules or inside of Copilot rules, and make sure you use those features.

Wes Bos

Yeah. I've been working on one for myself, which is, like, a standard modernization prompt. Because one of my biggest frustrations is when it tries to use old tech. I was talking about that just earlier. Right? So I've been working on this one myself, which is just like my my little prompt. Create a full stack Airbnb clone, blah blah blah blah. And as I was working on this, I was realizing, yeah, it doesn't make sense to tell it to scaffold it out. It's probably faster to just scaffold it out yourself. Unless specified, generate code with these rules. Use TypeScript, not JavaScript.

Wes Bos

Add appropriate TypeScript types.

Wes Bos

Use ES modules exclusively, never CommonJS.

Wes Bos

You may need to set type module in package.json. So, like, that's another one is it just tries to use require syntax all the time.

Wes Bos

If you're using TypeScript, it tries to go use ts-node or whatever, which you don't need those things all the time. You can use the experimental strip types flag in Node.js now and just use TypeScript directly in Node.js.
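As a rough sketch of that modern setup (assuming a recent Node.js release; the exact flags have shifted between versions, so treat the command as an approximation), a TypeScript file like this can run directly with Node, with no ts-node and no dotenv package:

```ts
// index.ts: with "type": "module" in package.json, this file is an ES module.
// Run it directly with something like:
//   node --experimental-strip-types --env-file=.env index.ts
// (newer Node versions strip types without the flag; --env-file loads .env
// into process.env, replacing the dotenv package)

import { readFile } from "node:fs/promises"; // ESM import, never require()

const apiKey: string | undefined = process.env.API_KEY; // populated by --env-file
console.log("API key configured:", Boolean(apiKey));

// Top-level await works because this is an ES module.
const pkg = JSON.parse(await readFile("./package.json", "utf8"));
console.log(`Running ${pkg.name}`);
```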

Wes Bos

Same with the .env file. It always tries to install the dotenv package, and then you have to use it. You don't need that anymore. Node.js has these things built in. Right? Mhmm. So I just have been collecting these things. That's a great way to do it. And you can bring it from project to project. Right? What my my next move here is, once I'm, like, happy with this Yeah. I'm going to publish this as an MCP server because Oh.

Wes Bos

The MCP server has of course, it has tools, but another part of the MCP server is prompts.

Wes Bos

So you can just use the prompts in there. You can say, hey, use my modern JavaScript prompt, and then it will just go and grab the modern JavaScript prompt, inject that into the thing, and then I can install that MCP server globally or in whichever project that I'm working on. And then I just don't have to update it or copy paste it around. It simply just references, like,

Scott Tolinski

mcp.wesbos.com. So Yeah. That's my next move. Do you think that's more efficient than having it as a rules file that you're just moving from project to project? I don't know. Because then you end up having the whole MCP server side of things instead of just, like, a file.

Wes Bos

Yeah. But the other thing is that, like, there's so many standards for rules right now. Yes. You know? There's there's probably six or seven different, like, customized rules, Cursor rules, llms.txt. There's probably six or seven of them out there, and I don't know how to keep moving that around. And I don't know if MCP prompts will replace cursor rules. You know? Like, are those the same things? Because an MCP prompt, it would just go and grab it once, and then it's part of that chat history. But what happens when you open a new chat? Do you have to tell it to grab that again? Whereas, like, a cursor rules would just be To exist. Yeah. So I don't know if those things are totally replacements for each other or not. I'd love to hear anyone who's listening what their thoughts are on that. Yeah. Totally word.
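For anyone curious what publishing a prompt from an MCP server could look like, here is a loose sketch using the MCP TypeScript SDK. The server name, prompt name, and exact method and message shapes here are assumptions based on the SDK's published examples, so check the current docs before relying on them.

```ts
// Hypothetical MCP server that serves a reusable "modern JavaScript" prompt,
// so any MCP-capable editor or chat app can pull it in on demand.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";

const server = new McpServer({ name: "wes-prompts", version: "1.0.0" });

// Register a prompt the client can request by name (API shape assumed).
server.prompt("modern-javascript", () => ({
  messages: [
    {
      role: "user",
      content: {
        type: "text",
        text: [
          "Unless specified otherwise, generate code with these rules:",
          "- Use TypeScript, not JavaScript, with appropriate types.",
          "- Use ES modules exclusively, never CommonJS.",
          "- Do not install dotenv; rely on Node's built-in .env support.",
        ].join("\n"),
      },
    },
  ],
}));

// Expose the server over stdio so an editor can connect to it.
await server.connect(new StdioServerTransport());
```

A client that supports MCP prompts could then request modern-javascript by name instead of pasting the same rules into every chat.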

Scott Tolinski

Yeah. Another one is you can ask AI itself to create rules based on your existing code base. If you have code and you want the new code to adhere to the code base, yeah, these things can be aware of your code base, but it it works better if you have rules. Right? So being able to have a rules file that is generated, that you don't have to type, is also a a good option. Yeah. It's it's always hilarious that, like, the the things that these LLMs need, they can also just generate themselves.

Wes Bos

Yeah. So you're gonna say, hey. This is a code base that I like. Or if you're starting a new code base and you really like the way that another code base has worked, you can ask it, hey. Create a summary of the standards and coding styles and technologies used in this project and export it to a file, and then you could just take that file and throw it in somewhere else.

Scott Tolinski

Yeah. Also, along the lines of the previous thing that we were talking about in terms of, like, format and stuff, I mean, have you have you used the MDC file extension?

Wes Bos

I have not used it directly, but I've seen it pop up in a couple of projects that I've scaffolded out. What is MDC? Is it markdown cursor? What is that?

Scott Tolinski

Yeah.

Scott Tolinski

It's a markdown file that basically has front matter that Cursor can understand. Like, if you look at it as a markdown file, it is just a markdown file with front matter. The reason why the MDC extension is there is because Cursor knows that it should use a UI when it's editing that file. And so you can, like, give it settings, essentially. Like, is this rules file always

Wes Bos

being attached? Is it always on? Is it agent requested? Is it only manual? And then there's pattern matching. So, okay, this is a rules file that should only exist when I'm, you know, editing TypeScript files that start with the word Scott. You know? Yeah. That's I I like that a lot, especially because, like, yeah, you want specific rules to be attached to specific types of files. But at a certain point, those are gonna get so large that you're stuffing it all into a single prompt, and then it starts to get watered down.
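As a hypothetical sketch of what one of those scoped rules files might contain (shown here as a string; the front matter keys and glob matching are an assumption about Cursor's project rules format, so verify against the current docs):

```ts
// Hypothetical contents of a .cursor/rules/svelte.mdc file: plain markdown
// plus front matter telling Cursor when the rule applies (keys assumed).
const svelteRule = `---
description: Svelte 5 component conventions
globs: ["**/*.svelte"]
alwaysApply: false
---

- Always use Svelte 5 runes syntax ($state, $derived, $effect).
- Do not use legacy Svelte 4 store syntax inside components.
`;

console.log(svelteRule);
```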

Wes Bos

I hope that we will see a standard to this type of thing because, like, the dot cursor rules file is already considered legacy, and now the rules file, or the project rules, is what everyone is using. I hope that everyone just sort of, like, standardizes on one sort of input, similar to how we're doing it with MCP, but we'll see.

Scott Tolinski

We'll see. It's it's not a bad idea to accumulate

Wes Bos

code styles and things, though, regardless in markdown. Yeah. This next one is probably the most important tip for getting good quality code out of an LLM, and that is breaking your project down into very clear, concise tasks.

Wes Bos

When you ask an LLM to do too much, it will do too much, and it will get muddy. So if you can break down what you want into clear, concise, actionable items, give it a nice clear task list, the ability to figure out if it did it right, then it is able to attack each of those one at a time and do a much better job. And, also, just from, like, your point of view as well: if you say, hey, build an entire app that does Uber but also has dog walking built in, and just go... you know? Like, it's just gonna create a mash of garbage. But if you're very clear with what it needs to do, especially if you're working on an existing project... create a component that does x, y, and z, and then go ahead and scaffold out the database, and then go ahead and make an API that it can talk to. It will do those things separately, and you get much better quality code out of there. And you, the developer, have a much easier time reviewing what needs to be accepted and modified.
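As a made-up illustration of that kind of breakdown, a checklist like this gives the model small, verifiable steps to attack one at a time. The feature, component, and endpoint names are all invented.

```ts
// A made-up example of breaking one vague ask into clear, checkable tasks.
// Each item is small enough to review on its own, and the checkboxes let the
// AI (or you) mark things off as they're completed.
const taskList = `
# Listing search feature

- [ ] Create a SearchBar component that accepts a query and calls onSearch
- [ ] Add a GET /api/listings endpoint that filters by city and price range
- [ ] Add a listings table to the database schema with a migration
- [ ] Wire SearchBar results into the listings page, with a loading state
- [ ] Add a test covering an empty-results search
`;

console.log(taskList);
```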

Scott Tolinski

Yes. For sure. And just to give people something even further there, where do you put these things? Where do you put the task list? Where do you put the PRD documents? Like, where would you, like, store that?

Wes Bos

I will either pop it into a markdown file that is in, like, the root. And it's kinda cool because you can put checkboxes beside each of the tasks, and you can have the AI just mark it as checked. And it can also just inject any logging that it needs to do, you know, like how things work, how it approached it. So I'll either do that. Like, I'll put it in, like, a spec.md, and it will go through that and check things off as it's done, or you can just dump the entire document into the prompt.

Wes Bos

That works well to a certain extent, but if it's too large, at a certain point, it'll only go back and forth so many times. Whereas, like, we're starting to see that limit be increased, especially with things like Claude Code. Like, it can run, or as can Cursor background agents, they'll run for hours working through the task list before you need to sort of intervene.

Scott Tolinski

Yeah. Yeah.

Scott Tolinski

That still freaks me out. I I'm definitely more of a hands on type of person, but I I get that. Yeah.

Scott Tolinski

Utilize llms.txt files where possible. This is something that... it's a trend that you're seeing now. You know? I don't know who started this first. Svelte was the first library I saw do this when they released their new version. But many of these libraries now, inside of their documentation, they do have an llms.txt, which is just a text version of their documentation for LLMs to consume. You can add them as docs inside of Cursor, or you can just use them to reference at any time when you need to reference them inside of your LLMs in general. These are good things to know that they exist, especially if you're working with more modern documentation and things like that. But you also, Wes, have a separate tip for this that is slightly different here.

Wes Bos

Yeah. Context7, we've talked about this on a couple of shows already, but Context7 is an MCP server. And what that will do is you hook it up to your IDE, or you hook it up to whatever accepts MCP, a chat app or an IDE or literally anything, and it provides it with tools to search for and download documentation for the different libraries. So I can do a really simple one where I just say, what do the docs on Tailwind four say about margin? And then it will go off and search for it by the library name, and then it gets some results. And then it will pick the one that has the highest confidence, and then it'll go off and download the documentation for that. And then it'll inject it into the prompt, and then it will run your query. Right? So a couple back and forths.

Wes Bos

And that's great because then you don't have to go find the Svelte llms.txt.

Wes Bos

Simply just say, reference the docs for Svelte five and and do x, y, and z, and it'll go off, download it, inject that into your prompt.

Scott Tolinski

Word. Another one is tag the files or functions, etcetera, that you want to be working on. Because if you're in a chat window and you start saying, hey. Do this and do that, it can go off and choose where it wants to do this or that. But the the chat windows have an ability to say at function, at file, and you just start typing the function name or the file name. And then that can give it more direction in terms of, like, where exactly or, like, I don't know if you've ever had this before where you ask it to do something and it, like, starts creating a new function for you when you already had one that's, like, doing Yeah. You just wanted to modify. So, you know, giving it that picture, like, again, you're talking to a junior developer or something. You're talking to an assistant coder. Hey.

Scott Tolinski

Give me this function that does this.

Scott Tolinski

Instead of like that, you could say, hey. Reference this function. Use this function. Modify this function, whatever, and and give it clearer instructions.

Wes Bos

I I have to use that a lot when I have, like, a folder that just has, like, six or seven different random scripts for something I'm working on, and it will just try to go and modify all of those scripts because it thinks that they work together. So you have to tell it, hey. Use this example.

Wes Bos

Yes. Next one we have here is feed the logs back into the AI. This is pretty simple, but if you don't have your IDE hooked up to your terminal and or your browser, so that the IDE can see the actual logs that are coming in and out, then it's a great idea, when you have troubles, to simply just copy paste what went wrong and dump that in there. So being able to feed all of the logs and also going through debugging. If you hit a problem where you go back and forth six or seven times, like, it's still not working, it's still not working, it's still not doing that, I found that what it will do is say, okay, let's implement some debugging, and then it will try to console log, add a whole bunch of console logs, or add a whole bunch of tests for you. And then it will say, alright.

Wes Bos

Go get me those logs if it's not hooked up already, and just copy paste them back in. So there's so much value in logging and in your errors, and, like, that's why Sentry is so helpful as well, because there's so much value in what went wrong, and being able to prompt the LLM and provide that context is key.

Scott Tolinski

Let's talk about Sentry then. Hey. Yeah. Good time to do that. There you go. Let me just tell you. You almost... I think you may have accidentally come up with a new Sentry tagline. There's so much value in what went wrong. My gosh, Wes.

Scott Tolinski

That was so so poignant the way you said that. And let me tell you, Seer, which is the new AI debugger within Sentry, like, really understands that there is so much value in what went wrong because I'm doing the thing that everyone was doing to you at the Syntax meetup where they were all just reusing the things that you said. I'm taking the thing that you said and now running with it like it's my thing. There is so much value, folks, in what went wrong. And Seer, again, the AI debugging tool from Sentry, is, like, the perfect place to really give you an understanding of, like, what actually went wrong, because we've all seen logs. We've all worked with logs. We've all worked with errors.

Scott Tolinski

And the root cause of things isn't always available to us in terms of our brains and how they work. Seer really understands the root cause of our issues and allows us to fix them. So give it a try at sentry.io/syntax.

Scott Tolinski

Sign up and get two months for free using the coupon code tasty treat, or just, man, check it out. Check it out. It's just really neat. There's a lot of cool new stuff coming from Sentry. I feel like they're always evolving.

Scott Tolinski

Speaking of always evolving here, your long running chats. Man, what about long running chats inside of AI? The longer you have a single chat going, most likely, the worse it will get in terms of, like, forgetting things that you've already told it because its context window has been filled up, or just giving you... you know, you get into a situation where you're just going in a circle and you're looping and looping and, like, no, you already tried that. And at that point, if you're getting into that point, it's best to start a new chat because, yeah, they will just get into loops over time.

Wes Bos

I find myself creating a new chat every time I'm working on, like, a different feature or problem.

Wes Bos

I like to keep the existing one in case I have to come back to to that one because the context windows are getting much larger.

Wes Bos

And that, like, problem of it getting overwhelmed is starting to go away, but it still can happen, especially if you've gone back and forth, like, two, three hundred times. Yeah. It's just like, this is this is a mess. I need to totally scrap it. So I find myself just, yeah, fresh, new window, and doesn't have the memory of everything that went wrong, especially if you've if you've tried three or four approaches and you wanna, like, forget all of the past approaches that you've done in the past.

Scott Tolinski

Yeah. When in doubt, throw it out. That's how I feel.

Scott Tolinski

Cool. Do you have any other tips here?

Wes Bos

I think that's it. I'll put a link to the tweet as well because I've got, I don't know, several hundred very good replies from a bunch of people in there. Some, not so good. It's also kinda funny because a lot of people think that things are just like... I don't know, tell it that you're going to ruin its family if it... like, people have all these little hilarious things. Like, I'm going to ruin your family. Yeah. Your family is going to have no food if you don't get this right. Like, I don't believe some of that. But a lot of people are simply just like: one thing at a time, planning or coding, tell it what to do, tell it what success looks like. A lot of the things that good development already has.

Wes Bos

But I'll I'll link it up if you wanna take a look at the replies and see if there's any more nuggets in there.

Wes Bos

Yeah. And just try stuff. See what works for you. That's always a good good meme. Yeah. Alright. That's all we got for today. Thank you so much for tuning in, and we'll catch you later.

Wes Bos

Peace.
