AI is intelligent, it just lacks sapience, sentience and other things humans have.
It is not. A key component of intelligence is being able to infer knowledge based on generalizations of previously obtained knowledge. Convolutional neural networks are not capable of this. They need to be trained on the data to predict results. They can’t conceptualize abstract ideas and apply them to predict never-before-seen circumstances.
The term was created by academics to describe the usage of computers to solve problems previously only solvable by humans or other intelligent creatures.
Correct, but that’s not what the modern usage is referring to. The academic term is referring to artificial general intelligence (AGI). The thing the capitalists are trying to sell using the term AI currently is just a predictive model.
If someone hangs a print of a famous painting on the wall, have they hung art on the wall?
No one is calling the printer an artist. Yes, the print is a piece of art. It’s a copy of something created with intent by the artist, not the printer. It doesn’t really matter that it’s a copy. That’s a very stupid argument if you’re going to “ship of Theseus” a print. It’s still a version of the original, just not the original itself.
If someone swinging a brush at random can create art…
How do you swing a brush randomly? Have you tried doing something random? You can’t really. Maybe they could build a machine that swings it randomly, though I’d say the act of building the machine is intentional, and artistic. The thing it creates is a piece of that process.
… I don’t see why using a pile of math, numbers and random noise to make an image can’t be art.
Because there’s no intention. A pile of math and numbers can be art. That’s all that anything digital is. But they aren’t necessarily art. Without some intent behind those numbers being that particular set of numbers, it isn’t art.
AI image generation just happens to be a medium or tool that is nearly entirely the intent and creativity of the artist.
How so? I’m assuming “artist” here is referring to the prompt creator. Their intent is not taken into account by the AI tool. Only their prompt is. If you put the same prompt in then it’ll generate different results each time, even if the intent of the prompt creator is the same. That would imply their intent is not part of the creation process.
Point being: “art” isn’t some mystical human only thing.
I never implied such a thing. I think a sufficiently intelligent creature other than a human could create art. Again, though, the product currently being called “AI” is not intelligent. It can’t abstract ideas into concepts that can be applied to unrelated subjects. That’s what would be required to make art.
Don’t gatekeep art based on the medium or method.
I’m not. I’m gatekeeping it on being creative. I don’t care that it’s digital.
They can’t conceptualize abstract ideas and apply them to predict never-before-seen circumstances.
That’s not the baseline for intelligence. Intelligence doesn’t even require thought, to say nothing of abstract thought.
that’s not what the modern usage is referring to. The academic term is referring to artificial general intelligence
No, the academic term is artificial intelligence, if what you’re referring to is “artificial intelligence”. If you’re referring to AGI, you typically say … AGI.
It seems you’re inflating what intelligence is, at the simplest levels, to be human-level intelligence. Conceptualization and generalization are definitely properties of an intelligent system if they’re present, but they aren’t prerequisites. Intelligence doesn’t have to mean “very intelligent”, or even “stupid” on a human scale.
Do you think a koala possesses intelligence? I would say they certainly do, despite not being able to conceptualize abstract ideas, or handle unforeseen circumstances. They struggle with “leaf not on branch”. But they can learn new information and apply it to their environment, as long as that information is along the lines of “where is eucalyptus leaf on branch?”
That’s a very stupid argument if you’re going to “ship of Theseus” a print. It’s still a version of the original, just not the original itself.
Well fuck you too, no need to be uncivil.
That’s not a ship of Theseus argument.
A question isn’t an argument.
As I said, I think it’s art too. If someone believes it to be art or it’s presented as art, I’m content to call something art.
Since you’re the person saying that sometimes things aren’t art depending on how they were made, I’m trying to figure out what your rules are so I can argue against them if I disagree. If art requires intelligence and intent, and being intended as art doesn’t matter, then something produced by a machine possessing neither might not be art, regardless of the intelligent and intentional human input on the other side.
For the randomness bit, you can replace ‘random’ with “arbitrarily, unconcerned with the output, in a fashion rendering it unpredictable to them”. I’ll admit to using the casual definition of random.
To do so, put the canvas on the floor, dip your brush in the paint, close your eyes, and move your hand in sharp jerking motions over the canvas. Technically not random, but the distribution of paint isn’t known to the artist until they look. Or, you could consider John Cage’s music, like I mentioned, or any of the other artists who have incorporated randomness into their art. https://artgallery.yale.edu/collections/objects/112012 might be one example.
It seems needlessly complex and exclusionary to say that every piece of the output must tie back to human intent, when so many artists try to not do that, and the creation itself was an act of intention.
If you put the same prompt in then it’ll generate different results each time
That’s not actually true. Some tools don’t give you the option to do otherwise, but the image generator is entirely deterministic. It is, after all, a computer.
The general process is that it will generate an image consisting of pseudorandom static based on an input seed (often the current time), and then use the CNN as a denoising algorithm on the image, using the prompt as a guide for how it “fixes” the image.
Hold the seed steady and it will produce the same result for the same prompt each time. You can also disregard the seed and provide your own static or non-static image, and it’ll deterministically try to correct it.
It’s as much a carrier for the human’s intent as anything else that a person controls to produce an output.
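To make that concrete, here’s a toy sketch of the process described above. The `denoise_step` function is a made-up stand-in for the trained network, and the image size and step count are arbitrary; the only point is that a fixed seed plus a deterministic denoiser gives the same image every time.

```python
# Toy sketch: seeded pseudo-random static + a deterministic "denoiser".
# denoise_step is a stand-in for the CNN, not any real model or library call.
import numpy as np

def denoise_step(image: np.ndarray, prompt: str) -> np.ndarray:
    # Stand-in for the network: any pure function of (image, prompt)
    # is enough to make the determinism point.
    return image * 0.9

def generate(prompt: str, seed: int, steps: int = 50) -> np.ndarray:
    rng = np.random.default_rng(seed)          # the seed fixes the starting static
    image = rng.standard_normal((64, 64, 3))   # pseudo-random noise "image"
    for _ in range(steps):
        image = denoise_step(image, prompt)    # deterministic correction steps
    return image

a = generate("a koala on a branch", seed=42)
b = generate("a koala on a branch", seed=42)
assert np.array_equal(a, b)   # same prompt + same seed -> identical output
```

Change the seed and you get the “different results each time” behaviour back; hold it steady and you don’t.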
With the “mystical human only thing” I wandered away from my point a little. The part about the threshold for art and intelligence was more to the point. The threshold for intelligence is barely different from “accepts information, which is utilized for a purpose”. A thermostat fits the bill. Everything else is a matter of degree, or related phenomena like sapience and sentience that synergistically enhance intelligence. As I said from the beginning, modern AI lacks those things.
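To show how low that bar sits, here’s a toy thermostat in a few lines; the setpoint and readings are made up for illustration.

```python
# Toy thermostat: accepts information (a temperature reading) and uses it
# for a purpose (holding a setpoint). No thought or abstraction involved.
SETPOINT = 20.0  # degrees C, arbitrary

def control(current_temp: float) -> str:
    return "heat on" if current_temp < SETPOINT else "heat off"

for reading in [18.5, 19.9, 20.3, 21.0]:   # made-up readings
    print(reading, "->", control(reading))
```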
In my opinion, art only requires that something be presented or perceived as art. Otherwise you start to run into issues where people say a certain tool disqualifies something as art, or that it’s not art because they don’t like or get it.
I’m not.
Well, you do seem to be quite specifically gatekeeping on method.
I’m curious if the awareness that the image generator is deterministic changes your thoughts, or that the inputs don’t need to be words.
I’m not going to continue this conversation, but I will say my computer science professor (years ago, before this current “AI” trend) who taught the AI course used the term AI to refer to AGI if nothing else was specified. In an academic sense, that’s what it meant if you didn’t say anything else.
That feels like a quirk of your professor. You should look into using a definition used by the rest of the field. Your usage makes it seem like you’re listening too much to the people who are trying to hype AI, and not enough to the people who are building it or who invented the field.