Minority Report? U.S. Warned — Cops Are Already Using AI to Stop Crimes BEFORE They Happen



Editor’s Note: We are officially living in Minority Report, but without all the cool futuristic gadgetry. Great.

by Claire Bernish

Pre-crime, a term coined by science fiction author Philip K. Dick that loosely describes the use of artificial intelligence to detect and stop crime before it happens, has become a terrifying reality, and will likely be business as usual for police within 15 years.

“Cities have already begun to deploy AI technologies for public safety and security,” a team of academic researchers wrote in a new report titled Artificial Intelligence and Life in 2030. “By 2030, the typical North American city will rely heavily upon them. These include cameras for surveillance that can detect anomalies pointing to a possible crime, drones, and predictive policing applications.”

The first installment in an ongoing series for the Stanford University-hosted One Hundred Year Study on Artificial Intelligence (AI 100), the report is intended to spark debate on the benefits and detriments of AI’s growing presence in society. In the area of law enforcement in particular, the removal of the human factor won’t necessarily end well.

As the academics point out, AI already scans and analyzes Twitter and other social media platforms to identify individuals prone to radicalization by the Islamic State, but even that seemingly well-intentioned use has expanded drastically.

“Law enforcement agencies are increasingly interested in trying to detect plans for disruptive events from social media, and also to monitor activity at large gatherings of people to analyze security,” the report notes. “There is significant work on crowd simulations to determine how crowds can be controlled. At the same time, legitimate concerns have been raised about the potential for law enforcement agencies to overreach and use such tools to violate people’s privacy.”

Police predicting crimes before they’re committed presents obvious risks to more than just people’s privacy. Indeed, the report warns of the possibility artificial intelligence could cause law enforcement to become “overbearing and pervasive in some contexts,” particularly as technology advances and is applied in different fields.

While “AI techniques — vision, speech analysis, and gait analysis — can aid interviewers, interrogators, and security guards in detecting possible deception and criminal behavior,” their possible application in law enforcement monitoring by surveillance camera, for instance, presents a remarkable capacity for abuse.

Imagine police CCTV cameras zeroing in on an individual who appears out of place in a certain neighborhood. AI might conclude the person intends to burglarize a business or residence and trigger the deployment of officers to the scene, even if that person simply lost their way or went for a walk in a new area. If we were not in the midst of an epidemic of violence perpetrated by law enforcement, such an error would not be life-threatening; as it stands, the risk of police brutality must be weighed before the human element is removed from pre-crime.

Besides restricting freedom of movement and potentially escalating a non-criminal situation into a deadly one, the assumptions made about a person’s presence in an area can have potentially deleterious effects on both the person and the neighborhood.

As police anti-militarization advocate and author Radley Balko reported for the Washington Post in December, several cities have begun sending letters to people simply for having visited neighborhoods known to police, though not established in a court of law, as high-prostitution areas. Such assumptions not only alienate the innocent and guilty alike, but further stereotype whole neighborhoods and their residents rather than addressing the issue of prostitution itself.

“Machine learning significantly enhances the ability to predict where and when crimes are likely to happen and who may commit them,” the report states. “As dramatized in the movie Minority Report, predictive policing tools raise the specter of innocent people being unjustifiably targeted. But well-deployed AI prediction tools have the potential to actually remove or reduce human bias, rather than enforcing it, and research and resources should be directed toward ensuring this effect.”
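For readers wondering what “predicting where and when crimes are likely to happen” actually computes, here is a deliberately simplified, place-based sketch. The grid size, the ranking rule, and the coordinates are invented for illustration; real predictive-policing systems are far more elaborate and largely proprietary.

```python
# Toy "hotspot" prediction: bucket past incident locations into grid
# cells and flag the busiest cells for patrol. All values here are
# made up; this only illustrates the general shape of the technique.
from collections import Counter

def hotspots(incidents, cell_size=0.01, top_n=3):
    """incidents: list of (lat, lon) pairs of past reported crimes.

    Buckets each incident into a square grid cell of side cell_size
    (in degrees) and returns the top_n busiest cells."""
    cells = Counter(
        (round(lat / cell_size), round(lon / cell_size))
        for lat, lon in incidents
    )
    return [cell for cell, _ in cells.most_common(top_n)]
```

The obvious limitation, and the one the report worries about, is that the model only sees *reported* incidents: if an area is already heavily policed, it generates more reports, attracts more patrols, and ranks ever higher, regardless of the underlying crime rate.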

As positive as that sounds, the removal of human bias and judgment is a pronounced double-edged sword. While that element undoubtedly stands at the core of increasing police violence, machine-assisted preconception sends officers into a situation under the assumption that a criminal act is imminent, regardless of whether that assumption is accurate.

However sunny a picture the academics paint about artificial intelligence in law enforcement, one of the largest experiments in AI-assisted policing in the United States already proved to be an astonishing failure.

Beginning in 2013, the Chicago Police Department partnered with the Illinois Institute of Technology to implement the Strategic Subjects List, which “uses an algorithm to rank and identify people most likely to be perpetrators or victims of gun violence based on data points like prior narcotics arrests, gang affiliation, and age at the time of last arrest,” Mic reported in December 2015. “An experiment in what is known as ‘predictive policing,’ the algorithm initially identified 426 people whom police say they’ve targeted with preventative social services.”
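The actual Strategic Subjects List model has never been made public, but the data points Mic names are enough to sketch the general shape of such a ranking. The weights and the helper names below are entirely hypothetical, chosen only to show how a handful of record fields can be collapsed into a single "risk" number and a ranked list.

```python
# Illustrative heat-list-style risk ranking. The weights are invented;
# the real SSL algorithm is not public, so this is only a sketch of
# the technique, not Chicago's actual model.
from dataclasses import dataclass

@dataclass
class Subject:
    name: str
    narcotics_arrests: int   # prior narcotics arrests
    gang_affiliated: bool    # known gang affiliation
    age_at_last_arrest: int  # younger age assumed to mean higher risk

def risk_score(s: Subject) -> float:
    """Collapse the data points named in the report into one score."""
    score = 2.0 * s.narcotics_arrests
    score += 5.0 if s.gang_affiliated else 0.0
    score += max(0, 30 - s.age_at_last_arrest) * 0.5  # youth weighting
    return score

def rank_subjects(subjects: list[Subject], top_n: int) -> list[Subject]:
    """Return the top_n highest-scoring subjects, i.e. 'the list'."""
    return sorted(subjects, key=risk_score, reverse=True)[:top_n]
```

Note what such a score never encodes: why the arrests happened, whether the gang label is accurate, or what an officer should do once a name surfaces, which is precisely the gap the RAND study described below found.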

But rather than proving efficacy in preventing violent crime, the experiment failed miserably.

As the American Civil Liberties Union has criticized, Chicago Police have been less than transparent about who ends up on the list and how the list is actually used. And despite the claim that social services would be deployed to address the underlying issues thought to predict future criminal activity, that has not been the case.

Indeed, RAND Corporation’s study of the Strategic Subjects List found those unfortunate enough to be identified by the algorithm were simply arrested more often. Although study authors couldn’t conclude precisely why this happened, it appears human bias — as mentioned above — plays a predictably major role.

“It sounded, at least in some cases, that when there was a shooting and investigators went out to understand it, they would look at list subjects in the area and start from there,” lead author Jessica Saunders told Mic.

Chicago Police had implemented a newer version of the list by the time RAND’s study was published, but several issues had yet to be addressed — among them, the lack of guidance given to officers on how to interact with listees, including which social services to deploy. Generally, the study discovered, police simply increased their interaction with target subjects — a factor known to contribute to police violence and curtailment of civil rights and liberties.

“It is not at all evident that contacting people at greater risk of being involved in violence — especially without further guidance on what to say to them or otherwise how to follow up — is the relevant strategy to reduce violence,” the study stated, as cited by Mic.

But issues with AI prediction aren’t limited to the government’s executive branch. Criminal courts across the country have been using a risk-assessment algorithm developed by Northpointe, “designed to predict an offender’s likelihood to commit another crime in the future,” but its application, like Chicago’s, hasn’t gone smoothly.

Gawker reported in May 2016 [emphasis added]:

ProPublica published an investigation into Northpointe’s effectiveness in predicting recidivism … and found that, after controlling for variables such as gender and criminal history, black people were 77 percent more likely to be predicted to commit a future violent crime and 45 percent more likely to be predicted to commit a crime of any kind. The study, which looked at 7,000 so-called risk scores issued in Florida’s Broward County, also found that Northpointe isn’t a particularly effective predictor in general, regardless of race: only 20 percent of people it predicted to commit a violent crime in the future ended up doing so.
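The kind of audit ProPublica ran can be made concrete with a small sketch: compare, group by group, how often people flagged as high risk turned out *not* to reoffend (the false positive rate). The records below are invented; the real analysis covered roughly 7,000 Broward County scores.

```python
# Illustrative bias check on a recidivism predictor: compute the false
# positive rate per group. Data here is made up; this only shows the
# shape of the measurement, not ProPublica's full methodology.

def false_positive_rate(records):
    """records: list of (flagged_high_risk, reoffended) boolean pairs.

    Returns the share of people who did NOT reoffend but were still
    flagged as high risk."""
    false_pos = sum(1 for flagged, reoffended in records
                    if flagged and not reoffended)
    negatives = sum(1 for _, reoffended in records if not reoffended)
    return false_pos / negatives if negatives else 0.0

def fpr_by_group(data):
    """data: dict mapping a group label to its list of records."""
    return {group: false_positive_rate(records)
            for group, records in data.items()}
```

A predictor can look reasonable on overall accuracy while its false positive rate differs sharply between groups, which is exactly the disparity the investigation reported.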

Whatever hopes the Stanford report glowingly offers for the potential uses of artificial intelligence, policing, and the criminal justice system in general, would benefit from further research before more widespread implementation. Science applied hastily, before it has been thoroughly tested and its possible repercussions exhaustively debated, has a penchant for egregious unintended consequences down the line.

Although the report notes “the technologies emerging from the field could profoundly transform society for the better in the coming decades,” it’s imperative to realize that the transformation could just as easily be for the worse.

Delivered by The Daily Sheeple



Contributed by The Free Thought Project of thefreethoughtproject.com.

The Free Thought Project is dedicated to holding those who claim authority over our lives accountable.


Source: http://www.thedailysheeple.com/minority-report-u-s-warned-cops-are-already-using-ai-to-stop-crimes-before-they-happen_092016


Before It’s News® is a community of individuals who report on what’s going on around them, from all around the world.



