AI can’t write programs, at least not complex programs. The programs and functions it can write well are the ones that are very well represented in the training data – i.e., ultra-simple functions or programs that have been written and re-written millions of times. What it can’t do is anything truly innovative. In addition, it can’t follow directions, it has no understanding of what it’s doing, and it doesn’t understand the problem or its own solution to it.
The only thing LLMs are able to do is create a believable simulation of what the solution to the problem might look like. Sometimes, if you’re lucky, the simulation is realistic enough that the output actually works as a function or program. But the more complex the problem, or the further it is from the training data, the less able it is to simulate something realistic.
So, rather than building a ladder where the rungs turn into propellers, it’s building one where the higher the ladder gets, the less the rungs actually look like rungs.
As I said elsewhere, the AI probably isn’t going to just be an LLM. It’s probably gonna be a complex model that uses modules like LLMs to fulfill a compound task. But the exact architecture doesn’t matter.
We know that it can output code, which means we have a quantifiable metric to make it better at coding, and thousands of people are certainly trying. AI video was hot garbage 18 months ago; now it’s basically perfect.
It’s not if we’re going to get a decent coding AI, it’s when.
It’s probably gonna be a complex model that uses modules like LLMs to fulfill a compound task.
That sounds very hand-wavy. But even the presence of LLMs in the mix suggests it isn’t going to be very good at whatever it does, because LLMs are designed to fool humans into thinking something is realistic rather than actually doing something useful.
We know that it can output code, which means we have a quantifiable metric to make it better at coding
How so? Project managers have been working for decades to quantify code, and haven’t managed to make any progress at it.
It’s not if we’re going to get a decent coding AI, it’s when.
LLMs are designed to fool humans into thinking something is realistic rather than actually doing something useful.
So, closer to average human intelligence than it would appear. I don’t know why people keep insisting that confidently making things up and repeating things blindly is somehow distinct from average human intelligence.
But more seriously, this whole mindset is based on a stagnation in development that I’m just not seeing. I think it was Stanford that recently released a paper on a new architecture with serious promise.
How so? Project managers have been working for decades to quantify code, and haven’t managed to make any progress at it.
I think you misunderstand me. The metric is the code itself. We can look at the code, see what kinds of mistakes it’s making, and then alter the model to try to do better. That is an iterative process.
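To make that concrete, here’s a rough sketch (purely illustrative, not anything described above) of what treating the code as the metric could look like: run each generated candidate against a test suite and use the pass rate, plus the specific failures, as the signal to iterate on. `generate_solution` is a hypothetical stand-in for whatever model is being evaluated.

```python
# Sketch: use the generated code itself as a quantifiable metric.
# Run the candidate against known test cases, record pass rate and failures.

def run_tests(func, cases):
    """Return the (args, expected, got) triples the candidate failed."""
    failures = []
    for args, expected in cases:
        try:
            got = func(*args)
        except Exception as exc:          # crashes count as failures too
            failures.append((args, expected, repr(exc)))
            continue
        if got != expected:
            failures.append((args, expected, got))
    return failures


def score(generate_solution, spec, cases):
    """Pass rate over the test cases: a simple, quantifiable coding metric."""
    candidate = generate_solution(spec)   # hypothetical model call
    namespace = {}
    exec(candidate, namespace)            # candidate is expected to define `solve`
    failures = run_tests(namespace["solve"], cases)
    return 1 - len(failures) / len(cases), failures


# Example with a stub "model" that always emits the same (wrong) candidate:
stub = lambda spec: "def solve(a, b):\n    return a - b"
rate, fails = score(stub, "add two numbers", [((1, 2), 3), ((0, 0), 0)])
# rate == 0.5; `fails` lists exactly which case broke, which is what you iterate on.
```

The point of the sketch is just that “better at coding” can be measured automatically, without a project manager having to judge the code by hand.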
The year 30,000 AD doesn’t count.
Sure. Maybe it’s 30,000 AD. Maybe it’s next month. We don’t know when the breakthrough that kicks off massive improvement is going to hit, or even what it will be. Every new development could be the big one.