Steve James wrote:the program doesn't understand what it's saying
This.
Dmitri wrote:Steve James wrote:the program doesn't understand what it's saying
This.
As an AI language model, I don't have subjective experiences or emotions like humans do, so I don't "understand" things in the same way that humans do. However, I'm designed to generate responses to input based on patterns I've learned from vast amounts of data. So while I don't have a personal understanding of my own words, I can recognize patterns and meanings based on my training data and use that to generate responses.
Steve James wrote:The thinking jobs of the future will be AI prompting jobs.
I dunno. If someone asks ChatGPT what gravity is, it collates all the available information in its database and puts it into language. That language itself may be wrong, because the program doesn't understand what it's saying. ChatGPT is known to make mistakes, even in things like math. Or it makes statements that any (or at least most) humans would know are wrong.
Imo, a program can't have more knowledge than the collection of human knowledge entered into its database. Humans can then ask questions. The questions, themselves, increase the database. However, ask ChatGPT "What is outside the edge of the universe?" Or "What came before the beginning of the universe?" Or, "Who's stronger, Zeus or Jahweh?" And, after/if it provides an answer, do humans accept it? What if they don't like or believe it?
I still say the only worry is people. "Fear, ignorance, anger/hatred" is a vicious cycle. Maybe they'll be able to program an AI to have feelings, and then it'll be like us. It's the same fear people have of alien extraterrestrials (or undocumented migrants): they're going to come here... and do what we've done, would do, or do? ChatGPT and AI in general are becoming a new "other." Just sayin.
Is AI going to eliminate jobs? Imo, it's going to eliminate the need for a lot of "work." If that means writing different types of letters or reports, that's not a bad thing. It can mean that workers who do those things become more productive -or it can mean that their employers fire them. Meh, same-o-same: those who fear AI aren't wrong. It's just that it's people who'll fuck it up or not.
Do you understand what you are saying?
So if you're looking at now and thinking this is as far as it's going to go, you need to look a little further down the road. They're predicting models a million times more powerful within ten years.
In 2020, OpenAI released an even more advanced model called GPT-3, which I am a part of. GPT-3 is capable of generating even more impressive text than its predecessor, thanks to its massive size and improved training techniques. It has been trained on an enormous amount of data, including text from books, websites, and other sources, making it one of the most sophisticated language models in existence.
Steve James wrote:Do you understand what you are saying?
Shucks. I don't know whether I'm not dreaming, or whether I'm conscious. But asking an AI whether or not it understands itself is irrelevant if it can be wrong. Ask if it's self-aware, and it'll output a human's answer, i.e., "How can I know?" If asked about a human quality, it will always answer "I don't know."
Steve James wrote:So if you're looking at now and thinking this is as far as it's going to go, you need to look a little further down the road. They're predicting models a million times more powerful within ten years.
I'm more interested in the effects of human predictions about AI. Sure, you can cite how much technology has evolved. My grandmother called the telephone the "witch cord." But the argument that "in the future, it'll be able to do a million times more" doesn't work for me. For one thing, we already rely on computers. The only way to eliminate AI would be to eliminate computers. So, what they'll be able to do in 10 years is inevitable -perhaps even if humans decided to do something about it.
Steve James wrote:In 2020, OpenAI released an even more advanced model called GPT-3, which I am a part of. GPT-3 is capable of generating even more impressive text than its predecessor, thanks to its massive size and improved training techniques. It has been trained on an enormous amount of data, including text from books, websites, and other sources, making it one of the most sophisticated language models in existence.
There was its own answer. I won't bother to worry until it deliberately lies, i.e., from a philosophical pov, until it considers the user an object to be manipulated. Otoh, that quality could probably be programmed in. So, as I've said, I'm only worried about the people. I'm concerned about the data that humans choose to input, and what they'll refuse to input. I can see it being misused by governments and evil actors. If we're doomed, it's not because of AI. Plus, I don't see why AI wouldn't be more interested in helping, if it even could be self-aware. I think that's just projection.
What would you do if you had all the data ChatGPT has or all the data in the world?
Steve James wrote:The Ohio train derailment is a good example. If an AI/computer were asked how to avoid rail accidents such as that, the decision would be based on the biases of the programmer. I.e., its programmed goal could be to prioritize saving human life or something else, such as maximizing profit. Or there could be a cost-benefit analysis showing that fixing all the brakes would be more expensive than paying off the insurance from any accident. Oops, the latter actually happened -in spite of the company's knowledge.
The same was true for the tobacco industry. Actually, it's true for every major industry where there is a known or potential social health risk. I just heard a representative from Alaska seriously ask/argue that abused children who died were a benefit to society because society doesn't have to pay for them later. My point is that humans will make these decisions. That's why I ask a human what they'd do with the "intelligence" of a computer. The degree of intelligence doesn't matter. So, imo, it's valid to fear how AI will be used. However, that's different from fearing AI. I worry about nuclear weapons, but not because they use the power of the atom. Otoh, I'm skeptical of nuclear reactors because humans screw up or will deliberately try to sabotage them. There is no evil in this world that is not human or of human origin.
Humans will use this tool to exploit and replace other humans.
Steve James wrote:Humans will use this tool to exploit and replace other humans.
Right. Humans are the issue. AI isn't human. I have no problem with using AI. I don't think it (of itself) wants to use me. I don't buy the idea that AI has any intention for itself.
Sure. Data collection is used all the time, everywhere. No one needed AI to do it, though AI probably makes it easier.
Of course, using ChatGPT or any other AI automatically adds to its database about you/us and everyone else.
Steve James wrote:AI won't put good writers out of business. Imo, too many works of great literature are already available. Of course, the AI will have access to all of them. It'd be interesting to ask an AI to write a "better" play or story than ________. I'm sure it'd easily output an answer. The question is whether any human reader would find it better.