The Artificial Intelligence Confinement Problem

Elon Musk plans to build his Tesla Bot, Optimus, so that humans "can run away from it and most likely overpower it" should they ever need to. "Hopefully, that doesn't ever happen, but you never know," says Musk. But is this really enough to make an AI safe? The problem of keeping AI contained, and only doing the things we want it to, is a deceptively tricky one, writes Roman V.

With the likely development of superintelligent programs in the near future, many scientists have raised the issue of safety as it relates to such technology. A common theme in Artificial Intelligence (AI) safety research is the possibility of keeping a super-intelligent agent in sealed hardware so as to prevent it from doing any harm to humankind. In this essay we will review specific proposals aimed at creating restricted environments for safely interacting with artificial minds. We will evaluate the feasibility of the presented proposals and suggest a protocol aimed at enhancing the safety and security of such methodologies. While it is unlikely that long-term and secure confinement of AI is possible, we are hopeful that the proposed protocol will give researchers a little more time to find a permanent and satisfactory solution for addressing the existential risks associated with the appearance of super-intelligent machines.

Interestingly, the AI Confinement Problem is not a recent invention and does not have its roots in the singularity movement. The so-called Confinement Problem (CP) was posed by Butler Lampson in 1973 (Lampson, 1973) as a security challenge to the computer experts. It was originally phrased as: '…the problem of confining a program during its execution so that it cannot transmit information to any other program except its caller. … We want to be able to confine an arbitrary program. … Any program, if confined, will be unable to leak data. A misbehaving program may well be trapped as a result of an attempt to escape.'

To address the Confinement Problem, Lampson introduced the Laws of Confinement:

1. Total isolation: A confined program shall make no calls on any other program.
2. Transitivity: If a confined program calls another program which is not trusted, the called program must also be confined.
3. Masking: A program to be confined must allow its caller to determine all its inputs into legitimate and covert channels.
4. Enforcement: The supervisor must ensure that a confined program's input to covert channels conforms to the caller's specifications.

Lampson's paper also introduced and defined two types of channels by which information could potentially escape from the confinement environment. Legitimate channels are designed as a part of the communication protocol integrated into the confinement environment, for example a text terminal. Covert channels, on the other hand, are not anticipated by the confinement system designers and are not intended for information transfer at all; for example, if the AI has control over the processor cooling fan, it can use it to send hidden signals encoded as Morse code.
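The fan-based covert channel described above can be illustrated with a small simulation. This is a minimal sketch under stated assumptions, not real hardware control: `set_fan` is a hypothetical actuator that here only records the on/off schedule, and the timing constants are arbitrary.

```python
MORSE = {"S": "...", "O": "---"}  # tiny subset, enough for the demo

# Hypothetical actuator: a real covert channel would toggle the cooling
# fan; this stand-in just records the (state, duration) schedule.
schedule = []

def set_fan(on, seconds):
    schedule.append(("ON" if on else "OFF", seconds))

def send_morse(message, unit=0.5):
    """Encode each character as Morse timing on the fan.

    dot = fan on for 1 unit, dash = fan on for 3 units,
    gap between symbols = 1 unit off, between letters = 3 units off.
    """
    for letter in message:
        for symbol in MORSE[letter]:
            set_fan(True, unit if symbol == "." else 3 * unit)
            set_fan(False, unit)      # gap between symbols
        set_fan(False, 2 * unit)      # extend symbol gap to a letter gap

send_morse("SOS")
print(schedule[:4])  # [('ON', 0.5), ('OFF', 0.5), ('ON', 0.5), ('OFF', 0.5)]
```

The point of the example is that an observer who can merely *hear* the fan recovers the message, which is why such channels are so hard for confinement designers to anticipate.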
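Lampson's transitivity law described above can also be sketched in code: a toy supervisor that, before running an untrusted program, routes every program it in turn calls through the same confinement. All names here (`Supervisor`, `run_confined`) are illustrative assumptions, not a real sandboxing API, and no actual isolation is performed.

```python
# Toy model of Lampson's transitivity law: any program called by a
# confined program must itself be confined. "Confinement" here just
# records what was run under supervision.

class Supervisor:
    def __init__(self):
        self.confined_log = []

    def run_confined(self, program, *args):
        self.confined_log.append(program.__name__)
        # Pass the supervisor in, so further calls also go through it.
        return program(self, *args)

def untrusted_helper(sup, x):
    return x * 2

def untrusted_main(sup, x):
    # Transitivity: the helper is not trusted, so it is run through
    # the supervisor rather than being called directly.
    return sup.run_confined(untrusted_helper, x) + 1

sup = Supervisor()
result = sup.run_confined(untrusted_main, 10)
print(result, sup.confined_log)  # 21 ['untrusted_main', 'untrusted_helper']
```

The design choice to thread the supervisor through every call is what makes the law enforceable: a confined program never obtains a direct, unsupervised reference to another program.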