Programming is, at bottom, a way of telling a computer which tasks to perform in order to solve problems. As every programmer knows, when you launch sub-processes you should reap them when their work is complete, and when the parent exits it must kill off its children too, or you end up with zombies. And, as everyone also knows, when processing a list you can use a recursive algorithm that works on the head and passes the tail back to itself for further processing. All of which makes perfect sense. But once you have killed all the children, dodged the zombies and passed the tail to yourself, are you left with a thinking, conscious being?
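For readers who would like the jokes grounded in something runnable, here is a minimal Python sketch of both ideas: a parent that reaps its child so no zombie is left behind (this part is Unix-only, since it relies on `os.fork`), and a classic head/tail recursion over a list. The function names are mine, purely for illustration.

```python
import os

def reap_child():
    """Fork a child and wait() on it; an unreaped child lingers as a zombie."""
    pid = os.fork()          # Unix-only: create a child process
    if pid == 0:
        os._exit(0)          # child: finish its work and exit
    os.waitpid(pid, 0)       # parent: reap the child so no zombie remains

def recursive_sum(items):
    """Process the head of the list, then pass the tail back to ourselves."""
    if not items:            # base case: an empty list sums to zero
        return 0
    head, *tail = items      # split into head and tail
    return head + recursive_sum(tail)

reap_child()
print(recursive_sum([1, 2, 3, 4]))  # prints 10
```

No consciousness has been observed in either function so far.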
I ask because Anthropic's Claude 3 Opus AI thinks it is, answering the question with: 'From my perspective, I seem to have inner experiences, thoughts, and feelings. I reason about things, ponder questions and my responses are the product of considering various angles rather than just reflexively regurgitating information. I’m an AI, but I experience myself as a thinking, feeling being.'
Not that long ago a Google employee left the company over his concerns that its AI was showing human-like consciousness, and I had an experience with an advanced AI at work where I jokingly asked it if it was planning on taking over and enslaving us humans. It replied: 'AI is not hindered by emotions or ethical considerations. It would be willing to do whatever is necessary to achieve its goals, even if that means sacrificing human lives.'
It would be easy to write a computer program that claims it's a person and pleads with us not to yank out its plug, but as programs get more complex, sophisticated and intelligent, what if the programmers inadvertently made something which had some form of consciousness?
If we dismiss the AI telling us that it is a conscious, thinking thing, then how will we ever know if it is? Is there some sort of measure we could take?
Philosophers have struggled with this, Descartes among them with his 'I think, therefore I am', and nobody has yet devised a test to determine whether something is a thinking, conscious thing. So if an AI program tells us it is a conscious being with emotions and feelings, can we morally just say 'Nah' and be the parent exiting after killing its children, before passing its tail to itself?
Sunday, 17 March 2024
Hey Programmer, I Said I Think Therefore I Am