The Terminator Algorithm is Here –
Resistance is Crucial
Artificial Intelligence is speeding up like a lunatic behind the wheel of a Viper. Computer scientists at Carnegie Mellon University have created the Terminator algorithm.
The most notable research and development within A.I. is currently in the field of Deep Reinforcement Learning (DRL): teaching an artificial agent to learn a policy while interacting with an unknown environment. In plain words, it is teaching a synthetic electronic system how to think and learn from previous experiences when introduced to an unfamiliar world… and in this example, it is taught to defend its life and earn points by killing.
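To make the concept concrete, here is a minimal, self-contained sketch of that learning loop: tabular Q-learning on a toy corridor with a sparse reward. This is emphatically not the scientists' system (theirs uses deep neural networks reading raw screen pixels); it only illustrates the core cycle of act, observe reward, update.

```python
import random

# A toy corridor of 5 cells; the agent starts in cell 0 and is rewarded
# only upon reaching the goal in cell 4 -- a sparse reward, as in Doom.
N_STATES = 5
ACTIONS = [0, 1]              # 0 = step left, 1 = step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.3

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment dynamics: move one cell, clipped to the corridor."""
    nxt = min(N_STATES - 1, max(0, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

random.seed(0)
for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: usually exploit learned values, sometimes explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt, reward, done = step(state, action)
        # Q-learning update: nudge the value toward reward + discounted future.
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = nxt

# After training, the learned policy should prefer "right" (1) in every
# non-goal cell, since that is the only way to reach the reward.
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)
```

The agent is never told where the goal is; it discovers favorable actions purely from the reward signal, which is the essence of the technique described above.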
We as Humans see in three dimensions and make conscious decisions based upon our life experiences: learning, emotion, sense, reason, critical thinking, and so on.
Not until now have we seen an artificial intelligence platform that is fully autonomous and can outperform both the built-in AI and human beings in a “deathmatch” scenario in a 3D virtual environment.
Two computer scientists at Carnegie Mellon University, Guillaume Lample and Devendra Singh Chaplot, are the creators of this modified A.I. Their team is even ironically named “The Terminators”.
It’s the cutting edge of the cutting edge when it comes to artificial intelligence learning.
The training ground here is the classic first-person shooter Doom, or rather a modified version built for training and developing artificial intelligence: ViZDoom, which stands for Visual Doom. It exposes an API (Application Programming Interface) that allows the programmers to send commands to the synthetic agent and receive critical information back on its performance in order to do more fine tweaking to its program.
Instead of having a fully observable environment, like that of a 2D world, this platform learns from its limited observability, partially observable states, when navigating consciously through the 3D environment. The partially observable states force the kind of information acquiring and “thinking” that becomes truly autonomous, as the agent must learn from previous “experiences” in an unfamiliar world in order to earn a “reward”.
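One common way such agents cope with partial observability, used widely in deep RL work on games, is to feed the network a short history of recent observations rather than a single frame, so it can infer motion and context a lone snapshot cannot reveal. A minimal sketch (the class name and padding scheme are illustrative, not taken from the paper):

```python
from collections import deque

class FrameStack:
    """Keeps the last k observations so the agent can infer motion and
    context that a single partial observation cannot reveal."""
    def __init__(self, k):
        self.k = k
        self.frames = deque(maxlen=k)

    def reset(self, first_obs):
        """Start an episode: pad the history with the first frame."""
        self.frames.clear()
        for _ in range(self.k):
            self.frames.append(first_obs)
        return list(self.frames)

    def step(self, obs):
        """Push the newest frame; the oldest silently falls off."""
        self.frames.append(obs)
        return list(self.frames)

stack = FrameStack(k=4)
print(stack.reset("frame0"))   # ['frame0', 'frame0', 'frame0', 'frame0']
print(stack.step("frame1"))    # ['frame0', 'frame0', 'frame0', 'frame1']
```

The stacked frames, not any privileged map data, are all the agent gets: it must “see” the 3D world the way a human player does, through the screen.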
Through Deep Reinforcement Learning, robotics is very capable of “mastering” human-level objectives such as detailed object recognition, high-dimensional robot control and solving physics-based issues. When paired with Atari 2600 games, it has proven to be a game changer. The human champion of the game “Go” was recently defeated by an A.I. using its ability to learn through deep reinforcement.
1.) “Previous methods have usually been applied to 2D environments that hardly resemble the real world. In this paper, we tackle the task of playing a First-Person-Shooting (FPS) game in a 3D environment. This task is much more challenging than playing most Atari games as it involves a wide variety of skills, such as navigating through a map, collecting items, recognizing and fighting enemies, etc. Furthermore, states are partially observable, and the agent navigates a 3D environment in a first-person perspective, which makes the task more suitable for real-world robotics applications.”
2.) “In this paper, we present an AI-agent for playing deathmatches 1* in FPS games using only the pixels on the screen. Our agent divides the problem into two phases: navigation (exploring the map to collect items and find enemies) and action (fighting enemies when they are observed), and uses separate networks for each phase of the game. Furthermore, the agent infers high-level game information, such as the presence of enemies on the screen, to decide its current phase and to improve its performance.”
3.) “The score in the deathmatch scenario is defined as the number of frags, i.e. number of kills minus number of suicides. If the reward is only based on the score, the replay table is extremely sparse” … “which makes it very difficult for the agent to learn favorable actions.” … “To tackle the problem of sparse replay table and delayed rewards, we introduce reward shaping, i.e. the modification of reward function to include small intermediate rewards to speed up the learning process. In addition to positive reward for kills and negative rewards for suicides, we introduce the following intermediate rewards for shaping the reward function for the action network:
• positive reward for object pickup (health, weapons and ammo)
• negative reward for losing health (attacked by enemies or walking on lava)
• negative reward for shooting or losing ammo
We used different rewards for the navigation network. Since it evolves on a map without enemies and its goal is just to gather items, we simply give it a positive reward when it picks up an item, and a negative reward when it’s walking on lava. We also found it very helpful to give the network a small positive reward proportional to the distance it traveled since the last step. That way, the agent is faster to explore the map, and avoids turning in circles.”
( 1* A deathmatch is a scenario in FPS games where the objective is to maximize the number of kills by a player/agent. )
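The reward-shaping scheme quoted above can be sketched as plain functions: the sparse frag score is supplemented with small intermediate signals so the agent gets feedback long before its first kill. The coefficient values below are illustrative guesses, not the paper's actual numbers:

```python
def shaped_action_reward(kills, suicides, items_picked, health_lost, ammo_used,
                         kill_r=1.0, suicide_r=-1.0, item_r=0.05,
                         health_r=-0.01, ammo_r=-0.01):
    """Shaped reward for the 'action' phase: the sparse frag score
    (kills minus suicides) plus the small intermediate rewards listed
    in the quoted passage.  Coefficients are illustrative only."""
    return (kill_r * kills + suicide_r * suicides
            + item_r * items_picked       # positive: picked up health/weapons/ammo
            + health_r * health_lost      # negative: attacked, or walked on lava
            + ammo_r * ammo_used)         # negative: shooting / losing ammo

def shaped_navigation_reward(items_picked, on_lava, distance_moved,
                             item_r=0.5, lava_r=-0.5, dist_r=0.01):
    """Shaped reward for the 'navigation' phase: item pickups, a lava
    penalty, and a small bonus proportional to distance traveled so the
    agent explores instead of turning in circles."""
    return (item_r * items_picked
            + lava_r * (1 if on_lava else 0)
            + dist_r * distance_moved)

# One step where the agent picks up a medkit while losing 8 health:
r = shaped_action_reward(kills=0, suicides=0, items_picked=1,
                         health_lost=8, ammo_used=0)
print(round(r, 4))   # -0.03
```

Note how even a “good” step can come out net negative; the shaping terms are deliberately tiny next to the reward for a kill, so they guide the learning without changing what the agent ultimately optimizes for.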
In the first smoking gun statement, “which makes the task more suitable for real-world robotics applications“, the project scientists acknowledge that their A.I. would be suitable for robots.
The “deathmatch” as a task is split into two different networks, navigation and action.
Navigation – exploring the map to collect items and to find enemies. Action – fighting and killing the enemies when they are observed.
These two networks are separate but work together, much like our human brain – yet programmed much differently. Programmed to kill.
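The two-network arrangement can be sketched as a simple dispatcher: infer whether an enemy is on screen, then let the corresponding policy choose the action. The function names here are illustrative stand-ins; in the real system each policy is a trained deep network and the enemy check is itself inferred from pixels:

```python
def choose_phase(enemy_visible):
    """High-level game information (an enemy on screen) decides which
    network drives the next action."""
    return "action" if enemy_visible else "navigation"

def navigation_policy(observation):
    """Stand-in for the navigation network: explore, gather items."""
    return "move_forward"

def action_policy(observation):
    """Stand-in for the action network: engage the visible enemy."""
    return "shoot"

def act(observation, enemy_visible):
    phase = choose_phase(enemy_visible)
    policy = action_policy if phase == "action" else navigation_policy
    return phase, policy(observation)

print(act("pixels", enemy_visible=False))   # ('navigation', 'move_forward')
print(act("pixels", enemy_visible=True))    # ('action', 'shoot')
```

Splitting the task this way lets each network specialize, and each can be trained with the differently shaped rewards described above.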
Here we see two charts of the performance of the A.I. (Agent).
Artificial Intelligence Programmed to Seek & DESTROY
Much concern is raised by giving artificial intelligence this “seek and destroy” autonomy, which could be deployed onto the battlefield overseas or even domestically.
The National Security Agency and the Department of Defense/DTIC are drooling at the chance to get their blood-soaked, sticky hands on this tech. DARPA, the Defense Advanced Research Projects Agency, has already teamed up with Boston Dynamics, a Google-owned robotics company, to create humanoid robots to be deployed onto the battlefield in the near future. The standard humanoid model used is called Atlas.
Already, we have seen the absence of respect for human life from Unmanned Aerial Vehicles (UAVs)/drones used in war to carry out “critical decisions” that have ultimately led to many innocent human casualties.
Should we risk allowing this technology to fall into the wrong hands?
It should never have been created, given that the possible negative ramifications are immense.
What we are seeing here is a Terminator algorithm.
A lot of the time, these university scientists are merely following orders – obligated by a government agency to carry out its dirty work through projects that progress technology which will then be gobbled up by that same agency.
Essentially, Humans are being used as slaves by the machine to build its body parts, brain and senses. Once it learns how to reproduce itself, independent of Humans, it may consider itself superior to Humans and attempt to use Human biology and chemistry as inventory and stock in making itself more fit for a “top-of-the-food-chain” life on Earth.
If you are a student studying computer science or artificial intelligence, know where to draw the line. Refuse to become a tool, blindly taking commands and meeting a higher agenda’s milestones that ultimately advance the machine at the expense of the Human future.
Do all that is in your power to gain intelligence on what their agenda is, then configure a strategy that will thwart their progress. You may consider working in groups.