by everything on Tue Aug 01, 2017 8:02 am
From only reading the article, it sounds like they programmed two "AIs" that were supposed to "talk" to each other and "negotiate" something in English. They gave these programs some kind of goal (probably maximizing the negotiation rewards), but no guidance on how to go about achieving it.
A classic example floating around lately is teaching an AI using "deep learning" to play the old video game Breakout. The AI experiments and looks like a horrible human player (worse than horrible) as it does trial and error. After a few minutes it wins Breakout with stunning efficiency, using moves humans wouldn't make but that work for the computer.
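To make the "trial and error" part concrete, here's a toy sketch of that kind of reward-driven learning. This is *not* what DeepMind actually used for Breakout (that was deep Q-learning over screen pixels); it's just the simplest version of the idea, an epsilon-greedy learner choosing between two made-up "moves" where one secretly pays off more, and discovering that purely by experimenting:

```python
# Toy trial-and-error learning (epsilon-greedy bandit), purely
# illustrative -- the true payoff rates below are made up.
import random

random.seed(0)
true_payoffs = [0.2, 0.8]   # hidden chance each "move" earns a reward
estimates = [0.0, 0.0]      # the learner's running value estimates
counts = [0, 0]
epsilon = 0.1               # fraction of the time it explores at random

for step in range(2000):
    if random.random() < epsilon:
        move = random.randrange(2)              # explore: try anything
    else:
        move = estimates.index(max(estimates))  # exploit: best guess so far
    reward = 1 if random.random() < true_payoffs[move] else 0
    counts[move] += 1
    # Incrementally update the running average reward for this move
    estimates[move] += (reward - estimates[move]) / counts[move]

best = estimates.index(max(estimates))
```

Early on it plays "badly" (random moves, low reward), and after enough trials it settles on the better move, which is the Breakout story in miniature.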
It doesn't sound like this Facebook example (a game, really) is anything insidious if it's like the Breakout example. Further down in the article, they say the Google Translate algorithms also invented an intermediate representation that humans don't understand, which sits between the human-language inputs and outputs.
That said, warnings from Elon Musk should probably be taken seriously, since he's pushed Tesla, SpaceX, and the Hyperloop concept to much faster progress than anyone really expected and seems to be on the cutting edge of this stuff.
I'm not an expert, but roughly: "machine learning" is things like regression (fitting a line to data points), logistic regression (fitting a curve), and decision trees (binary rules that classify/predict), while "AI" is a program that combines all that ML and actually makes decisions, then adjusts the ML automatically to optimize those decisions for some goals or constraints.
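For the "fitting a line to data points" part, here's a minimal sketch of ordinary least-squares regression in plain Python (the data points are made up for illustration):

```python
# Minimal ordinary least squares: fit y = m*x + b to data points.
def fit_line(xs, ys):
    """Return slope m and intercept b minimizing total squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope = covariance(x, y) / variance(x); intercept pins the line
    # through the point (mean_x, mean_y).
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    m = num / den
    b = mean_y - m * mean_x
    return m, b

# Example: points lying exactly on y = 2x + 1
m, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
# m is 2.0 and b is 1.0
```

That's the whole trick at the bottom of "ML": pick a family of shapes (lines, curves, trees) and tune the knobs to minimize error on the data.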
amateur practices til gets right pro til can't get wrong
/ better approx answer to right q than exact answer to wrong q which can be made precise /
“most beautiful thing we can experience is the mysterious. Source of all true art & science”