AI Warnings From History
Thursday, 16 January 2025

The progress of artificial intelligence (AI) has been relentless, with each new version many times more powerful than its predecessor, and this raises urgent questions about safety and the very future of humanity.
Concerns about computers are not new. The English 19th-century mathematician Ada Lovelace, recognised as the first computer programmer for her work with Charles Babbage, warned in 1842 that we should 'guard against the possibility of exaggerated ideas that might arise as to the powers of the analytical engine' and that 'the collateral influences this machine has must never be underestimated'.
In 1950, Alan Turing designed a test to determine whether a computer could think in a way comparable to a human, and he later warned: 'If a machine can think, it might think more intelligently than we do, and then where should we be?'
George Orwell said in 1937: 'The sensitive person’s hostility to computers is in one sense unrealistic, because of the obvious fact that computers have come to stay. And even if the whole of humanity suddenly revolted against computers and decided to escape to a simpler way of life, the escape would still be immensely difficult.'
In 1950, the scientist and mathematician Norbert Wiener wrote: 'The machine like the djinnee which can learn and can make decisions on the basis of its learning, will in no way be obliged to make such decisions as we should have made, or will be acceptable to us.' The physicist Stephen Hawking had similar concerns, writing in 2016 that the rise of AI could be 'the biggest event in the history of our civilisation, but it could also be the last unless we learn how to avoid the risks. Alongside the benefits, AI will also bring dangers like powerful autonomous weapons or new ways for the few to oppress the many.'
Those are some pretty important people in computing history, and they seem to be telling us to tread very carefully.
1 comment:
Lovelace and Babbage died before electricity... Lovelace basically developed "flow charting", which is far from programming. Very far. That is why only "some" people consider her to be the first programmer.
Turing didn't live long enough to actually see a significant computer...
George Orwell had nothing to do with computing...
Hawking had nothing to do with computing... well, he was a user...
Wiener... father of "cybernetics", but really just another MIT scientist/professor. I've worked with a few MIT folks in their Industrial Innovation Lab. Wiener may have been brilliant, but I never heard him get credit for computing or AI, including from the ML/AI folks... hmmmmm
The people you quote are smart for sure, some of them decent futurists, but with the exception of Hawking they were opining without significant operational knowledge of computers.
BTW, Hawking's quote is flawed... You don't avoid risks, you mitigate them. The two, avoidance and mitigation, are not even close. The only way to avoid risk is to not advance... He was a well-known physicist, but it seems he wasn't much of a project/program manager.
Relentless? One could argue AI advancement has been relentless since circa 2000, but progress was hardly steady and continuous from 1945, and it was spotty with huge gaps going back to Babbage.
Having worked with many people involved in technology and innovation for the DOD going back as far as 1967, I'd say it is likely the DOD is 20 years ahead of the AI possessed by any NGO.