Online Trust Alliance (OTA) Executive Director and President Craig Spiezle testified today before the U.S. Senate’s Homeland Security and Governmental Affairs Permanent Subcommittee on Investigations, outlining the risks of malicious advertising, and possible solutions to stem the rising tide.
“Today, companies have little, if any, incentive to disclose their role in or knowledge of a security event, leaving consumers vulnerable and unprotected for potentially months or years, during which time untold amounts of damage can occur,” said Spiezle. “Failure to address these threats suggests the need for legislation not unlike state data breach laws, requiring mandatory notification, data sharing and remediation for those who have been harmed.”
It is important to recognize there is no absolute defense against a determined criminal. At the hearing, OTA proposed incentives to companies who adopt best practices and comply with codes of conduct.
Spiezle emphasized that these companies “should be afforded protection from regulatory oversight as well as frivolous lawsuits. Perceived anti-trust and privacy issues must be resolved to facilitate data sharing to aid in fraud detection and forensics.”
The US Office of Naval Research this week offered a $7.5m grant to university researchers to develop robots with autonomous moral reasoning ability.
While the idea of robots making their own ethical decisions smacks of SkyNet – the science-fiction artificial intelligence system featured prominently in the Terminator films – the Navy says that it envisions such systems having extensive use in first-response, search-and-rescue missions, or medical applications.
The ONR-funded project will first isolate essential elements of human moral competence through theoretical and empirical research, and develop formal frameworks for modeling human-level moral logic. Next, it will implement corresponding mechanisms for moral competence in a computational architecture. Once the architecture is established, researchers can begin to evaluate how well machines perform in human-robot interaction experiments where robots face various dilemmas, make decisions and explain their decisions in ways that are acceptable to humans, according to Selmer Bringsjord, professor and head of the Cognitive Science Department at Rensselaer, who along with researchers from Brown, Yale and Georgetown will share the grant.
The US Department of Defense forbids the use of lethal, completely autonomous robots. Semi-autonomous robots, researchers say, cannot choose and engage particular targets or specific target groups that have not been selected by an authorized human operator.
According to ONR cognitive science program director Paul Bello, even though today’s unmanned systems are ‘dumb’ in comparison to a human counterpart, progress toward incorporating more autonomy is being made at a rapid pace. “Even if such systems aren’t armed, they may still be forced to make moral decisions.” Bello also noted in an interview with DefenseOne.com that in a catastrophic scenario, the machine might have to decide who to evacuate or treat first.
In a press release, Bringsjord said that since the scientific community has yet to mathematize and mechanize what constitutes correct moral reasoning and decision-making, the challenge for his team is severe.
In Bringsjord’s approach, all robot decisions would automatically go through at least a preliminary, lightning-quick ethical check using simple logics inspired by today’s most advanced artificially intelligent and question-answering computers. If that check reveals a need for deep, deliberate moral reasoning, such reasoning is fired inside the robot, using newly invented logics tailor-made for the task. “We’re talking about robots designed to be autonomous; hence the main purpose of building them in the first place is that you don’t have to tell them what to do,” Bringsjord said.
“When an unforeseen situation arises, a capacity for deeper, on-board reasoning must be in place, because no finite ruleset created ahead of time by humans can anticipate every possible scenario in the world of war.”
For example, consider a robot medic generally responsible for helping wounded American soldiers on the battlefield. On a special assignment, the robo-medic is ordered to transport urgently needed medication to a nearby field hospital. En route, it encounters a Marine with a fractured femur. Should it delay the mission in order to assist the soldier?
If the machine stops, a new set of questions arises: The robot assesses the soldier’s physical state and determines that unless it applies traction, internal bleeding in the soldier’s thigh could prove fatal. However, applying traction will cause intense pain. Is the robot morally permitted to cause the soldier extreme pain?
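The two-tier architecture Bringsjord describes — a lightning-quick preliminary screen that escalates to deeper deliberate reasoning only when needed — can be sketched roughly as follows, applied to the robo-medic dilemma above. The rule set, action names, and the toy harm/benefit scoring in `deep_moral_reasoning` are hypothetical illustrations, not the project's actual logics:

```python
# Hypothetical sketch of a two-tier ethical check: a fast rule-based
# screen, escalating to slower deliberate reasoning only when a simple
# rule flags the action as morally loaded.

FAST_RULES = {
    "deliver_medication": "permitted",      # routine mission step
    "ignore_wounded_soldier": "deliberate", # needs deeper reasoning
    "apply_painful_traction": "deliberate", # benefit vs. inflicted pain
}

def deep_moral_reasoning(action: str) -> str:
    """Placeholder for the slow, deliberate reasoning layer.

    Here we just weigh a toy harm/benefit score; the real project
    proposes purpose-built formal logics for this step.
    """
    harm_benefit = {
        "ignore_wounded_soldier": -1,  # untreated bleeding could be fatal
        "apply_painful_traction": +1,  # causes pain, but prevents death
    }
    return "permitted" if harm_benefit.get(action, 0) > 0 else "forbidden"

def ethical_check(action: str) -> str:
    # Unknown actions default to "deliberate": unforeseen situations
    # are exactly the ones no finite ruleset can anticipate.
    verdict = FAST_RULES.get(action, "deliberate")
    if verdict == "deliberate":
        verdict = deep_moral_reasoning(action)
    return verdict
```

The design point is that most decisions never reach the expensive second tier; only actions the fast screen cannot clear trigger the deeper on-board reasoning.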
Bringsjord and others are preparing to demonstrate some of their initial findings at an Institute of Electrical and Electronics Engineers (IEEE) conference in Chicago in May. There they will demonstrate two autonomous robots: one that succumbs to the temptation to get revenge, and another – controlled by the moral logic they are engineering – that resists its vengeful “heart” and does no violence.
Washington (CNN) — Never fear the night of the living dead — the Pentagon has got you covered.
It has also devised an elaborate plan should a zombie apocalypse befall the country, according to a Defense Department document obtained by CNN.
In an unclassified document titled “CONOP 8888,” officials from U.S. Strategic Command used the specter of a planet-wide attack by the walking dead as a training template for how to plan for real-life, large-scale operations, emergencies and catastrophes.
And the Pentagon says there’s a reasonable explanation.
“The document is identified as a training tool used in an in-house training exercise where students learn about the basic concepts of military plans and order development through a fictional training scenario,” Navy Capt. Pamela Kunze, a spokeswoman for U.S. Strategic Command, told CNN. “This document is not a U.S. Strategic Command plan.”
Nevertheless, the preparation and thoroughness exhibited by the Pentagon for how to prepare for a scenario in which Americans are about to be overrun by flesh-eating invaders is quite impressive.
A wide variety of zombies, each posing its own lethal threat, could be confronted and should be planned for, according to the document.
Zombie life forms “created via some form of occult experimentation in what might otherwise be referred to as ‘evil magic,’” to vegetarian zombies that pose no threat to humans due to their exclusive consumption of vegetation, to zombie life forms created after an organism is infected with a high dose of radiation, are among the invaders the document outlines.
Every phase of the operation is discussed, from conducting general zombie awareness training and recalling all military personnel to their duty stations, to deploying reconnaissance teams to ascertain the general safety of the environment, to restoring civil authority after the zombie threat has been neutralized.
And the rules of engagement with the zombies are clearly spelled out within the document.
“The only assumed way to effectively cause casualties to the zombie ranks by tactical force is the concentration of all firepower to the head, specifically the brain,” the plan reads. “The only way to ensure a zombie is ‘dead’ is to burn the zombie corpse.”
There are even contingency plans for how to deal with hospitals and other medical facilities infiltrated by zombies, and the possible deployment of remote controlled robots to man critical infrastructure points such as power stations if the zombie threat becomes too much.
A chain of command from the President on down, along with the roles to be played by the State Department and the intelligence community in dealing with the zombie apocalypse, is spelled out in the document.
The training document was first reported by Foreign Policy magazine.
This is also not the first time zombies have been used as the antagonist in U.S. government training operations. Both the Centers for Disease Control and the Department of Homeland Security have used the creatures as a vehicle for training their personnel in the past.
Defense officials stress the report in no way signals an invasion of zombies is on the horizon. The only real purpose of the document was to practice executing a plan for a situation as large and serious as flesh-eating beings trying to overrun the United States.
And why zombies?
Officials familiar with the planning of it say zombies were chosen precisely because of the outlandish nature of the attack premise.
“Training examples for plans must accommodate the political fallout that occurs if the general public mistakenly believes that a fictional training scenario is actually a real plan,” the document says. “Rather than risk such an outcome by teaching our augmentees using the fictional ‘Tunisia’ or ‘Nigeria’ scenarios used at (Joint Combined Warfighting School), we elected to use a completely impossible scenario that could never be mistaken as a real plan.”
So, is this practice for when, where and how to plan for a more likely disaster scenario? Yes. But zombies of all stripes would be well advised to take note of this directive to Strategic Command personnel buried within the document.
“Maintain emergency plans to employ nuclear weapons within (the continental United States) to eradicate zombie hordes.”
Amazon, AT&T, Snapchat rated among the least trustworthy with data, EFF finds
The companies ranked poorly in a report by the Electronic Frontier Foundation
By Zach Miners, IDG News Service
May 15, 2014, 4:15 PM — Amazon, Snapchat and AT&T rank among the least trustworthy technology companies when it comes to how they handle government data requests, according to a report from the Electronic Frontier Foundation.
The nonprofit privacy advocacy group released its fourth annual “Who Has Your Back” report Thursday, ranking the trustworthiness of tech firms based on a variety of criteria, including whether they require a warrant for user data and whether they publish transparency reports.
Of the more than two dozen companies ranked, Apple, Credo Mobile, Dropbox, Facebook, Google, Microsoft, Sonic.net, Twitter and Yahoo took top honors, earning the maximum six stars, one for each category studied.
AT&T and Amazon earned only two stars, while Snapchat was awarded just one.
A wealth of personal information and data is stored with Internet companies, and concerns over the handling of data have skyrocketed in the wake of disclosures about government spying, as well as cyberattacks and companies’ own policies and products.
The report’s findings are based on the actions companies take on matters relating to government user-data demands, as well as their stance on transparency. The report was based on publicly available data and records, and did not look at any secretive anti-surveillance measures the companies may have in place. Responses to national security requests cloaked by a gag order weren’t factored in either.
Companies were assessed based on six criteria: requiring a warrant for data; telling users about government data requests; publishing transparency reports; publishing law enforcement guidelines; fighting for users’ privacy in courts; and publicly opposing mass surveillance.
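The one-star-per-criterion scoring behind the rankings can be sketched as follows. The criterion names paraphrase the report, and the sample inputs in the usage note are illustrative, though their star totals match the figures the report gives for Snapchat (one star, for publishing law enforcement guidelines) and Amazon (two stars, for requiring a warrant and fighting for users in court):

```python
# Sketch of the EFF's six-criteria star scoring: a company earns one
# star for each pro-user practice it follows, for a maximum of six.

CRITERIA = [
    "requires_warrant",                      # requires a warrant for data
    "tells_users_about_requests",            # notifies users of demands
    "publishes_transparency_report",
    "publishes_law_enforcement_guidelines",
    "fights_for_users_in_court",
    "opposes_mass_surveillance",             # public opposition
]

def star_count(practices: dict) -> int:
    """One star per criterion the company satisfies; unlisted = unmet."""
    return sum(1 for c in CRITERIA if practices.get(c, False))
```

For example, `star_count({"publishes_law_enforcement_guidelines": True})` yields Snapchat's single star, while `star_count({"requires_warrant": True, "fights_for_users_in_court": True})` yields Amazon's two.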
Following leaks by former U.S. National Security Agency contractor Edward Snowden, more companies have sought to be forthcoming about how they handle government demands for data. Companies such as AT&T, Verizon and Comcast issued their first-ever transparency reports during the period the EFF examined, which is partly why major companies like Google and Facebook ranked high on the list.
But others haven’t stepped up to the plate as much, according to the EFF. Snapchat earned only one star for publishing law enforcement guidelines, the report said. A Snapchat spokeswoman said the company routinely requires a search warrant when law enforcement comes knocking, but the nature of its service means often there is no content to divulge.
Amazon received two stars for requiring a search warrant and for fighting for users’ privacy in courts.
To develop its report, EFF collaborated with the data analysis company Silk to analyze trends in government access requests.
The EFF characterized the report’s findings as generally positive. “We saw a remarkable improvement in the areas we’ve been tracking,” said Cindy Cohn, legal director at the EFF, with nearly a year’s worth of Snowden leaks helping to focus public attention on the issues.
But researchers also lamented the government’s turtle-like pace in protecting users as the technology industry plows ahead. Even more troubling, the government has relied on legal uncertainties to gain greater access to user data, they said.
“Too often, technology companies are the weak link, providing the government with a honeypot of rich data,” the EFF’s report said.
Zach Miners covers social networking, search and general technology news for IDG News Service. Follow Zach on Twitter at @zachminers. Zach’s e-mail address is firstname.lastname@example.org