I've been having some incredible times vibe coding lately, and I'm really starting to gather and develop some interesting tools and skills. I want to talk about a few things. One is a rewrite of an existing system that I experimented with last night, and the other is a prototype demo for a pitch meeting that I did a couple of days ago. The speed with which I was able to get these things out is outstanding.

I don't know how much of this is the increasing intelligence of the models my IDE is using, because I just use auto-select in Cursor and let it decide which model to use and when. It seems pretty smart about that, and the decision typically falls into one of two categories. One is how intelligent a model the task actually needs. At this point it's pretty obvious that any basic model is extremely competent at piecing together shell scripts and Bash commands, so those are cheap tokens. The other is availability: if a lot of people are banging away at a particular model or API, that model might only be available to premium users, so you just skip to the next best thing.

I also think a lot of people waste tokens asking for the same thing over and over. What you want to do is learn as much as you can; force yourself to use it as a learning tool. The temptation not to is strong, because for anything AI is competent at doing, it's way faster than you are. It's not even a competition anymore at this point.
For a given task, if an AI can do a competent job at it, it's going to do it better, faster, and at a scale you can't possibly compete with. So the temptation is to not learn, and you have to keep a balance; it's a trade-off. Sometimes you just have to say: you know what, I'm not going to learn that. Think of your cognition as a finite resource. Just like you can say that certain things aren't worth your attention, learning works the same way: learning is a cognitive task that consumes brain resources. So you have to evaluate what's valuable to learn. The temptation is to not learn anything, especially since the AI is going to be so much better than you. And it's not always obvious that it is. The danger of leaning on the AI increases, typically (not always), when it's something you don't know or understand. So a lot of what I find myself doing is reading really fast when I ask it to do things I'm not super familiar with: just reading the output and keeping at least a high-level overview of what it says it's doing. Because it's really tempting to just sit there, let the output crunch, hit next, and watch a TV show while ignoring it. I'm just using that as an arbitrary example of something unproductive.
You could also let it run while you go do another task and then come back. Usually I try to pay the most attention when I'm less sure of the outcome; if I have a high level of confidence that the AI is going to perform the task well, I'll flip over to something else.

So I got this really cool technique last night where I keep a development history document and a to-do document. Basically, I'm trying to increase the level of complexity of what I task it with, and I give it really clear instructions on how to do each part and what I want. In this particular case it's a rewrite of an existing system, so I'm really just recreating a lot of existing functionality from a code base that's so legacy it doesn't make sense to try to upgrade it. It's got examples of the different models it can use as a specification for the schema of different database tables, and even the layout of the user interface and user experience. So I can tell it: go find the client model, or the industry model, and build the schema, build a migration.

I'm doing all this on Vercel with Supabase as the database, so it's really nice to be able to connect all these cloud services and manage none of the infrastructure, because this is a startup MVP. Vercel and Supabase are actually pretty decent solutions, viable for a pretty solid-size user base. I started getting a sense of how little scale we need to support for the MVP (I say little; it's all relative), and started thinking we're not going to need anything heavier than
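To give a feel for what those model-to-schema prompts produce, here's a minimal sketch. The `clients` table, its columns, and the `renderMigration` helper are all hypothetical stand-ins for the legacy models, not the real schema; the point is that the output is plain SQL that the Supabase CLI could apply as a migration.

```typescript
// Hypothetical column spec mirroring a legacy "client" model.
type Column = { name: string; type: string; constraints?: string };

const clientColumns: Column[] = [
  { name: "id", type: "uuid", constraints: "primary key default gen_random_uuid()" },
  { name: "name", type: "text", constraints: "not null" },
  { name: "industry", type: "text" },
  { name: "created_at", type: "timestamptz", constraints: "not null default now()" },
];

// Render a CREATE TABLE statement that a migration file could contain.
function renderMigration(table: string, columns: Column[]): string {
  const body = columns
    .map((c) => `  ${c.name} ${c.type}${c.constraints ? " " + c.constraints : ""}`)
    .join(",\n");
  return `create table if not exists ${table} (\n${body}\n);`;
}

const migration = renderMigration("clients", clientColumns);
console.log(migration);
```

In practice the agent writes the SQL directly from the legacy model; the sketch just shows the shape of the artifact I'm asking for.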
Supabase and Vercel for a pretty good while, and we've already got some pretty solid foundational AWS infrastructure in place. So it won't be a huge job to deploy these workloads into more elastic, high-availability AWS infrastructure when the time comes. In fact, that move will probably be more about cost optimization than performance and reliability. That's just shooting from the hip; I could be wrong about it. The only way to know for sure is to run the experiment and be ready to adjust one way or the other. If you start hitting performance bottlenecks, or the cost of these services eats you up as they scale, then you invest in the engineering cost of deploying infrastructure. We've already done a solid amount of that, and it went fast because I built all the infrastructure thus far in Terraform, which lets me leverage my IDE agent to quickly launch, maintain, develop, and refactor infrastructure resources. I'd love to do a deep dive on the power of managing infrastructure with Terraform plus AI coding agents, but right now I want to talk more about the workflow I have with the to-do document.

I've got the development history in one document. That's a long-running document that, at this point, just takes the commit history and writes it to formatted Markdown. Over time I may add troubleshooting documents. Actually, that's what I'll do: multiple sets of troubleshooting documents and feature-planning specification documents, plus a master development history document that links to different parts of the others. The to-do document is different: it actually doesn't grow.
Well, it doesn't grow continually over time. It's really more of a task management system: rather than create tickets for this stuff, I just keep a document. This is interesting from a project management perspective. Ordinarily you'd have some kind of project management system, whether it's Jira, or GitHub issues (which, as a developer, I prefer); some people like things like Asana. You create a ticket, and it's an object in a database, in a web-based solution where everyone has access to it, with all kinds of nifty bells and whistles: notifications, little tags, rich text fields, attachments, comment sections. And that's great when you've got a team of humans, especially if you're limited to a team of humans. A lot of these solutions are adding AI features; not long ago I was working with a team that leveraged Asana just as it was starting to incorporate AI features, so I'm sure everybody's leaning heavily into that stuff. But what I found interesting last night is this: forget the team of humans. It's just me and my IDE with a single text document. I don't even know if I've blogged about any of this yet.

Obviously I'm authenticated with the GitHub CLI and the AWS CLI, and I can also authenticate with the Jira CLI through Atlassian, though I don't know how important that is here. So what I did last night is I just started a document. What I'd been doing previously was one chat at a time: I'd watch all the output, follow along, and when it was done, I'd be there to tell it what to do next.
Once it's done, I'll type up what I want to do next. A lot of times the model is thinking and changing its mind, and it's not always necessary, or even valuable, to read its whole train of thought, other than to get a feel for how it thinks. If things go wrong, you'll want to review that thought train to find out where it went off the rails. If it does something stupid, or something just doesn't work, or it gets stuck in a loop it can't break through, sometimes you have to go understand things on a deep technical level and leverage your human intuition to break through to that next level.

I really do think we're still completely dependent on humans to create new knowledge. I don't know that we have evidence that AI has created any new knowledge. And it's weird, because the more I use it, the more it feels like it's getting smarter. I can measure the performance increase, and it's obvious all around that it's getting more useful; you could probably consider what you're experiencing as engagement with an increased intelligence. But it still fails to create new knowledge. My babies all created more new knowledge in their first year of life than the smartest AI. And when I say smart, I mean it knows crazy, PhD-level things. But I still don't have evidence, and this may actually be really challenging to measure. I still think the source of all artificial intelligence is new knowledge created by human imagination and intuition. I'm going off on a complete tangent here, but I do want to go a couple of levels deeper before I circle back.
It feels like you could relate it to the human ability to ignore things. I know I've quoted this a million times before, but Huxley, in The Doors of Perception, describes the brain as a reducing valve. I always used to trip out over that because it seemed like a deep thing for a stoner to philosophize on. But Huxley was no joker; he was a heavy hitter, and it's true on a much deeper level than I acknowledged when it first caught my attention. Our superpower really is to take in this flood of enormous amounts of information from the universe and ignore everything but what's important to us. AI is the opposite, especially neural nets trained with reinforcement learning from human feedback: models trained on massive amounts of data with massive amounts of electricity, processed at scale to make them smarter and more useful.

I don't follow enough people who are cynical toward AI. Most of the time, if somebody's cynical toward AI, it's either because they know they can get clicks saying so, or they're people who just naturally hedge against everything. I'm more the optimistic type: I want to believe in things and see how far we can take them. But I get the importance of a hedge. There are a lot of shady people out there pitching ideas just to make money, conning people into investing big in vaporware. So it's important to have someone in a position of authority, not necessarily delegated authority, but a position of respect, to hedge against it and say: no, I'm calling bullshit.
You're lying to these people, you're exaggerating, it's not as valuable as you say; prove it. That weeds out a lot, but it also crushes a lot of people who would otherwise succeed. I don't want to go down a whole train of thought about hedge funds and some of the damage they do to inspired entrepreneurs, but when I look at entrepreneurs, I think of them kind of like Navy SEAL candidates. I'm going to tell you that you can't do it, that it's not going to work, that you're going to fail, that this is bullshit, that you might as well go home, grab a blanket on the beach, we're cooking hamburgers, there's no shame in quitting. Because these people need to be ready to take an ungodly beating, and I need to know: are you weak? Are you going to give up? No matter how easy we try to make it on you, it's going to be unbelievably hard, so you might as well prepare for the hardest thing you can imagine, and expect it to be worse than that. If you just keep coming back for more, then it's up to you. Okay, I didn't mean to go off on that tangent, but it's all good stuff.

Back to the to-do thing. I keep this document and I just start listing off all the things: what do we want to do next, and what's down the road? While the agent is processing, while it's thinking, I'm halfway reading its output and halfway writing what I want to do next, so that when it's done, all I have to do is copy and paste into the chat: okay, do this; now do this; now do this. I can just keep typing. And while the chat agent is thinking, I've got the editor set to auto-suggest in Markdown, so it's effectively thinking ahead with me in the project.
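The to-do document itself is just Markdown checkboxes, which also makes it trivially machine-readable. Here's a minimal sketch of that idea; the task names are made up, and `parseTasks` is my own illustrative helper, not part of any tool.

```typescript
// A to-do document kept next to the code, instead of tickets in a tracker.
const todoDoc = `
- [x] Scaffold Next.js project and connect Supabase
- [ ] Recreate the client model schema and migration
- [ ] Port the industry model and its admin UI
`;

type Task = { done: boolean; text: string };

// Parse Markdown checkbox lines into tasks.
function parseTasks(doc: string): Task[] {
  return doc
    .split("\n")
    .map((line) => line.match(/^- \[([ x])\] (.+)$/))
    .filter((m): m is RegExpMatchArray => m !== null)
    .map((m) => ({ done: m[1] === "x", text: m[2] }));
}

const tasks = parseTasks(todoDoc);
// The next prompt to paste into the chat is simply the first open task.
const next = tasks.find((t) => !t.done);
console.log(next?.text);
```

The nice property is that the same file works for me (checkboxes in the editor) and for any future automation that wants to pull the next open task.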
So I don't even have to type the whole thing out, and there's no harm in just dumping it all out; maybe I'll copy and paste it, maybe I won't. Where I imagine this heading is adding another layer to the project management stack, if you think of project management as a stack. There may be some tasks I want to assign to a human, but it may be more accurate to say I'll assign them to a human who's going to orchestrate his or her own set of agents (not to dehumanize anyone). And then there's doing more on mobile and through automation: evolving where the work happens to be more phone-based and automated.

I transitioned pretty much permanently away from a large desktop setup, and it was probably one of the best things I ever did. I should blog about that in particular, too. The first thing I did was retire my desktop: threw the monitor out, turned the machine into a server, and physically broke myself away from it, forcing myself to use the Mac for everything. That let me bounce around and work from wherever, using Spaces instead of multiple monitors. That was life-changing. Now the tendency is to do as much as I can on mobile, and the next obvious evolution is to do more that's entirely automated. But with full automation you've got to be careful about which tasks you automate and which you don't, because you'll waste a lot of tokens if you try to hand the whole project over to Copilot. You really want to keep a hand on the wheel.
So two themes are emerging here. One is the evolution from a big desktop environment to a more lightweight setup, which pushes you into a pattern where you do more on mobile, just a phone, and automate more and more, while staying prepared for the future. But also keep yourself embedded; keep your attention embedded. Basically, you want to balance spreading your attention out with periods of, I guess, multitasking. I hate that word, because I like to rail against it; there's almost a religious ideology around how valuable people multitask. But that's pretty much what this is: spreading your attention across a vast number of things. You don't want any of your AI to get too detached from humans, and not because of what people like Elon say, that it's going to leave us behind as a superintelligence and we'll struggle to be along for the ride. Maybe in a different context you could bring me along with that. What I'm specifically talking about now is competency: the farther the AI gets from human attention, the more incompetent it gets. So you need to develop workflows that balance your depth of attention against your spread of attention. At certain times you'll need to throw out all distractions and focus in on one thing. Find a short list of the problems that combine most valuable to solve with most difficult to solve, bite off one of those, block out everything else, ignore everybody and everything, and focus on that one thing for an extended period of hours.
Give it your full attention and go as deep as you possibly can, but then balance that with periods of spreading your attention out across all the different agents doing things for you. I said I was going to talk about two things; actually, everything so far relates to the first one. It's about attention: taking your attention and finding ways to spread it out like tentacles, or a web, or tree limbs, branching off into vast spaces of work being done by agents that need direction and orchestration.

The other theme that's emerging is this: don't think of AI as replacing anything, really. It enhances things; it augments things. I've mentioned this before: an AI workflow doesn't replace a traditional workflow, it adds a layer on top of the stack. For example, say you're building an application in Next.js. I started to call this new AI layer a framework, and maybe that's still the right word, but my reason for using it was that I envisioned replacing a hard, tangible framework as a code base with a set of prompts that would generate vanilla code. Now I see it more as a layer in a stack. Just as you have higher-level, English-based languages like Python or C that sit higher up the software stack, with a kernel underneath, all the way down to where you're giving the actual hardware its instructions, it's a stack of languages. So think of it as no longer just the framework plus your novel code.
It's the framework, plus a set of documents: a combination of prompts, specifications, and plaintext descriptions that communicate ideas and logic, sitting as a layer of the stack on top of the framework. It's not all vanilla code; you're still using the framework. You didn't replace the framework, you built on top of it, so it's a new layer in the stack. That's a good way to think about it.

I wanted to tie this back to the progress I made last night, because it was pretty outstanding. I built things that would have taken weeks just months ago, and I did them in hours. It's unbelievable. I'm really having a lot of fun with Supabase and with Vercel. Vercel is also where I deployed the prototype for the pitch meeting demo. It's wild: I started typing up a plaintext proposal that I was just going to read through, almost like speech cards, and that evolved into, well, why don't I just throw together a landing page with some graphical representations of things? Then my IDE really got carried away and started building actual UI. So I figured: if you're eager to do this, then I can pour more of my attention, imagination, and intuition into it, and we can create a synergy that increases velocity and output. That's exactly what happened. In about two hours I had a prototype that included a content management system, a CRM, and a bunch of mock-ups for a native app and some IoT device application software. The CMS was functional UI; all I'd have to do is add a back end. I'm totally going to follow up and build it, whether they want to seed money into it or not. So then I took that same workflow forward.
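To give a sense of how little "adding a back end" to that CMS would involve, here's a hedged sketch of one endpoint. `Page`, `getPage`, and `stubFetch` are all hypothetical names I'm inventing for illustration; the data source is injected so the stub below could later be swapped for a real Supabase query without touching the handler logic.

```typescript
// Hypothetical content record for the prototype CMS.
type Page = { slug: string; title: string; body: string };

// Handler takes its data source as a parameter, so the UI can be wired
// to Supabase later without changing the route logic.
async function getPage(
  slug: string,
  fetchPage: (slug: string) => Promise<Page | null>
): Promise<{ status: number; json: unknown }> {
  const page = await fetchPage(slug);
  if (!page) return { status: 404, json: { error: "not found" } };
  return { status: 200, json: page };
}

// Stub data source standing in for the eventual database query.
const stubFetch = async (slug: string): Promise<Page | null> =>
  slug === "home" ? { slug, title: "Home", body: "Welcome" } : null;

getPage("home", stubFetch).then((res) => console.log(res.status, res.json));
```

That separation is also what makes this kind of prototype honest: the UI is real, and the seam where persistence plugs in is explicit.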
I really do think a lot of it has to do with the increase in model intelligence, which makes me excited about what it's going to be like a month from now, two months from now. It's time to launch a software product. There's no reason not to: if you have any engineering experience and you're interested in using these AI teams, there's absolutely no reason not to dive in, even if it's just an hour a day, or one all-nighter a week, or whatever works for you. It's an exciting time. And if I can do these things now, compared to what I could do just a couple of months ago... Now, look, a lot of it is me: me learning these tools, learning from these tools, using them to learn more. There's no magic silver bullet that's going to do it all for you. But a lot of it absolutely is them pushing out new models.

There are some ethical concerns I could get into another time. Basically, there's a lot of political pushback against the data centers, and I don't like the increase in electricity costs either; there's got to be some kind of policy solution that makes sense. But I am getting real productivity gains from this AI; it's opening doors for me. If I can start launching software products with this stuff and have my own companies bringing in my own money, not just working for other people, then I can easily pay for the increase in electricity costs. I don't want to get into too much detail, but Sam Altman put out a tweet just yesterday where he was really dancing around what I think the main point was: making models talk to adults more like adults. I think what he's addressing is people who may be against the increased electricity costs that the data centers are driving.
People like me are building software solutions out of this increased intelligence. We're doing it faster; we're doing things we were never previously capable of. But most people are not. They have no idea what's going on, and nothing about it resonates with them. So I think what he's reaching for is the primary driver of every previous technology, whether it was the internet, video games, or GPUs: adult content. And I'll leave it at that, because I don't want this whole thing to be about that. What I'm trying to do is leverage the value of the output of these data centers to make the increased electricity cost worth it for me. I think everyone should do that, because otherwise you're going to end up in the electricity bread lines, consuming the addictive content that OpenAI is gearing up to push out. I think that's a reasonably subtle way to word it. But yeah, that captures everything that was on my mind on this particular topic.