Wednesday, June 30, 2010

Discussion with Robert

THIS IS AN EMAIL THREAD-- PLEASE READ FROM BOTTOM TO TOP
--------------------------------------------------------

On Fri, May 28, 2010 at 1:16 PM, asuarez510@gmail.com wrote:

Quick response cause I'm in a meeting...

Suicidal robot: Asimov's Three Laws almost require it. A robot must obey instructions from a human above its own existence. That would never allow a bot to behave humanly, since self-preservation is the strongest of human instincts.

Sent from my Verizon Wireless Phone


----- Reply message -----

From: "Robert Sobrado"

Date: Fri, May 28, 2010 12:44 pm
Subject: Chat with andrea

To: "Andrea Suarez"


I'd love to write a book with you ;). What you do is amazing, and the complexity of the joints you are recreating is no simple task. The wrist is an injury that never fully heals. If you guys succeed, that will really make a difference in the quality of life of many people.

Back to AI. First, I never give much credit to the claims of any research institute. Second, ten years ago computers got faster at an alarming exponential rate; today all we can do is add more processors, a practice that has been around for almost 20 years. Third, on "programming a formula that changes based on certain outcomes": the outcome may never be perfect.

Say, for example, we built a Mountain Adventure Robot of Fail (MARF). MARF is built to run up and down a mountain as fast as possible, looking for an optimal path with ground strong enough to support it moving at some maximum speed. MARF starts out in a rainy spring; the path it learned in a wet environment will change in a dry summer, and in winter everything it knows will reverse, and its top speed will change drastically when running downhill. There will never be one all-encompassing correct program for it to run with, only situational ones. How well it detects its environment may improve its chances of deciding what to do, but it will be in a constant state of failure and improvement. The goal of an optimum path is impossible; MARF was designed to fail.

A modern program can repeatedly fail and never know it, and the same can be said for most people. If computers ever realize that all they can hope to do is fail, we will have created an army of suicidal robots. Humans and animals can cope with failure; a machine will only follow its programming.
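MARF's situational learning loop can be sketched in a few lines of Python. Everything here is illustrative (the path names, traversal times, and the update rule are made up for the example), but it shows the "constant state of failure and improvement" idea: the robot keeps refining a per-path estimate, and a season change invalidates what it learned until new runs correct it.

```python
# Hypothetical sketch of MARF's situational learning; nothing here
# comes from a real robot.
class MARF:
    def __init__(self, paths):
        # Estimated traversal time per path; starting at zero means
        # every path gets tried at least once.
        self.estimate = {p: 0.0 for p in paths}

    def choose_path(self):
        # Pick the path currently believed to be fastest.
        return min(self.estimate, key=self.estimate.get)

    def record_run(self, path, actual_time, rate=0.5):
        # Blend the new observation into the old estimate. A high rate
        # forgets old seasons quickly; a low rate trusts history more.
        self.estimate[path] += rate * (actual_time - self.estimate[path])

def season_time(path, season):
    # Toy environment: the wet season reverses which path is fastest.
    base = {"ridge": 10.0, "gully": 20.0}[path]
    return base if season == "dry" else 30.0 - base

bot = MARF(["ridge", "gully"])
for season in ["dry"] * 20 + ["wet"] * 20:
    p = bot.choose_path()
    bot.record_run(p, season_time(p, season))
```

After the dry season the bot prefers the ridge; once the rains reverse the terrain, its old "optimum" fails until fresh runs push it onto the gully. There is never a final correct program, only the current best guess.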

This is fun! I love playing Devil's Advocate.

On Fri, May 28, 2010 at 11:11 AM, Andrea Suarez wrote:

You said there is no such thing as code that can learn. But what about a program that adapts according to the results of past attempts? That could certainly exist now. Isn't that learning? Programming a formula that changes based on certain outcomes: it doesn't repeat bad results, and looks to repeat good ones (though of course, people do repeat bad results!). Kind of like an optimization. But then, what does it consider to be "good results"? That certainly varies tremendously from human to human. You would start with the same exact program and achieve different results and different decisions from each machine, based on what decisions it is faced with and the outcomes of each decision.
And your assumptions are based on existing technology; what about 50 years from now? There are already research institutes claiming they are getting closer and closer to accomplishing what they call "mind uploading." The brain is also a series of electrical impulses (mind you, a much more complex one). Take your smartphone example, multiplied by a huge factor. How come the brain is able to adapt and learn, but artificial intelligence is not? Is it a matter of further complexity of existing circuits, rather than a lack of appropriate technology?
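The outcome-driven program described above can be sketched very simply. This is a toy, not a claim about any real system: the action names and the +1/-1 outcome scores are invented for illustration. The point is that the code hard-codes no "right answer", only a rule for preferring whatever has worked before, so two identical copies diverge once their experiences differ.

```python
# Minimal sketch of a program that adapts to past outcomes.
class Learner:
    def __init__(self, actions):
        self.score = {a: 0 for a in actions}  # running tally of outcomes

    def decide(self):
        # Repeat good results: favour the action with the best tally.
        return max(self.score, key=self.score.get)

    def observe(self, action, outcome):
        # outcome is +1 for a good result, -1 for a bad one.
        self.score[action] += outcome

# Two identical copies of the same exact program...
a = Learner(["left", "right"])
b = Learner(["left", "right"])
# ...fed different histories of outcomes...
a.observe("left", +1); a.observe("right", -1)
b.observe("left", -1); b.observe("right", +1)
# ...now make different decisions from the same code.
```

Same program, different histories, different decisions, which is exactly the divergence described above.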

Very interesting stuff!! Want to write a book with me? lol

On Fri, May 28, 2010 at 11:58 AM, Robert Sobrado wrote:


A lot can be attributed to, as you said, "our decisions are based on a subconscious formula involving a certain initial moral code and the result of past decisions." But that is incomplete. You don't give enough credit to personality and the stimulus provided by immediate surroundings. A pig-headed person will attempt the same experiment multiple times and expect different results, like extended CPR on a loved one. A puppy can fall in a pool and fear pools for the rest of its life, where a human can overcome a similar experience. A guy can hold a door open for a woman to pass 99% of the time, but one time he can be lost in thought and just not notice the old lady struggling to open a door with a coffee pot in her hand. Two identical girls can grow up in the exact same environment, doing the exact same things together their whole lives, and not like the same qualities in a man.

I have programmed autonomy; there are no real mysteries left unsolved. It is a computer: it does what you tell it to do, without fear or hesitation. There is no such thing as code that can learn. There only exists code that can copy and repeat. If you teach a program how to walk on a flat surface and how to climb a vertical one, it has no chance of figuring out how to climb down from a cliff. These are things babies do before they learn to speak.

I am a big nerd. If something is simple, there is less room for failure. All modern technology can be broken down into very simple processes. A smartphone is just a fancy radio with a few microprocessors that transfer and collect data by V = IR. With enough of them working on enough signals simultaneously, you can do some amazing things.
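The copy-and-repeat point can be made concrete with a toy example (the situation names and moves are made up for illustration). A program "taught" two behaviours holds them as stored responses; faced with anything outside that table, it cannot generalize the way a baby on a cliff edge would, and simply fails.

```python
# Toy illustration of copy-and-repeat code: stored responses only,
# no generalization. Situations and moves are invented for the example.
behaviour = {
    "flat ground": "walk forward",
    "vertical wall": "climb up",
}

def act(situation):
    # Replay a stored response, or fail outright.
    move = behaviour.get(situation)
    if move is None:
        raise RuntimeError(f"no programmed response for {situation!r}")
    return move
```

`act("flat ground")` and `act("vertical wall")` replay what was taught; `act("cliff edge")` raises, because nothing in the table covers climbing down.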


On Fri, May 28, 2010 at 10:24 AM, Andrea Suarez wrote:

haha! We're not making people into Frankenstein over here! We try to rebuild your own broken bones (in tiny little parts after, say, an ATV accident) with Ti implants and screws, to avoid a full prosthesis and additional surgery every 10 years. We most recently finished the elbow, and are moving on to the wrist now. Right out of surgery you can't even feel the implants; it's your own joints you're using.

Funny you mention the RoboCop thing; I actually did a series of blogs for a class last year on the integration of bionic bots into society as workers, etc., that I may be turning into a book. The professor showed interest in publishing it if I continue developing it. Figured I'd give it a shot. Right now I'm looking into the possibilities and consequences of programming free will and psychopathic robots (after all, our decisions are based on a subconscious formula involving a certain initial moral code and the result of past decisions, right?)
