Online Trust Alliance (OTA) Executive Director and President Craig Spiezle testified today before the U.S. Senate’s Homeland Security and Governmental Affairs Permanent Subcommittee on Investigations, outlining the risks of malicious advertising, and possible solutions to stem the rising tide.
“Today, companies have little, if any, incentive to disclose their role in or knowledge of a security event, leaving consumers vulnerable and unprotected for potentially months or years, during which time untold amounts of damage can occur,” said Spiezle. “Failure to address these threats suggests the need for legislation not unlike State data breach laws, requiring mandatory notification, data sharing and remediation to those who have been harmed.”
It is important to recognize there is no absolute defense against a determined criminal. At the hearing, OTA proposed incentives to companies who adopt best practices and comply with codes of conduct.
Spiezle emphasized that these companies “should be afforded protection from regulatory oversight as well as frivolous lawsuits. Perceived anti-trust and privacy issues must be resolved to facilitate data sharing to aid in fraud detection and forensics.”
“Only a week after the International Day Against DRM, Mozilla has announced that it will partner with proprietary software company Adobe to implement support for Web-based Digital Restrictions Management (DRM) in its Firefox browser, using Encrypted Media Extensions (EME).
The Free Software Foundation is deeply disappointed in Mozilla’s announcement. The decision compromises important principles in order to alleviate misguided fears about loss of browser marketshare. It allies Mozilla with a company hostile to the free software movement and to Mozilla’s own fundamental ideals.
Although Mozilla will not directly ship Adobe’s proprietary DRM plugin, it will, as an official feature, encourage Firefox users to install the plugin from Adobe when presented with media that requests DRM. We agree with Cory Doctorow that there is no meaningful distinction between ‘installing DRM’ and ‘installing code that installs DRM.’
We recognize that Mozilla is doing this reluctantly, and we trust these words coming from Mozilla much more than we do when they come from Microsoft or Amazon. At the same time, nearly everyone who implements DRM says they are forced to do it, and this lack of accountability is how the practice sustains itself. Mozilla’s announcement today unfortunately puts it — in this regard — in the same category as its proprietary competitors.
Unlike those proprietary competitors, Mozilla is going to great lengths to reduce some of the specific harms of DRM by attempting to ‘sandbox’ the plugin. But this approach cannot solve the fundamental ethical problems with proprietary software, or the issues that inevitably arise when proprietary software is installed on a user’s computer.
In the announcement, Mitchell Baker asserts that Mozilla’s hands were tied. But she then goes on to actively praise Adobe’s “value” and suggests that there is some kind of necessary balance between DRM and user freedom.
There is nothing necessary about DRM, and to hear Mozilla praising Adobe — the company who has been and continues to be a vicious opponent of the free software movement and the free Web — is shocking. With this partnership in place, we worry about Mozilla’s ability and willingness to criticize Adobe’s practices going forward.
We understand that Mozilla is afraid of losing users. Cory Doctorow points out that they have produced no evidence to substantiate this fear, nor made any effort to study the situation. More importantly, popularity is not an end in itself. This is especially true for the Mozilla Foundation, a nonprofit with an ethical mission. In the past, Mozilla has distinguished itself and achieved success by protecting the freedom of its users and explaining the importance of that freedom, including publishing Firefox’s source code, allowing others to make modifications to it, and sticking to Web standards in the face of attempts to impose proprietary extensions.
Today’s decision turns that calculus on its head, devoting Mozilla resources to delivering users to Adobe and hostile media distributors. In the process, Firefox is losing the identity which set it apart from its proprietary competitors — Internet Explorer and Chrome — both of which are implementing EME in an even worse fashion.
Undoubtedly, some number of users just want restricted media like Netflix to work in Firefox, and they will be upset if it doesn’t. This is unsurprising, since the majority of the world is not yet familiar with the ethical issues surrounding proprietary software. This debate was, and is, a high-profile opportunity to introduce these concepts to users and ask them to stand together in some tough decisions.
To see Mozilla compromise without making any public effort to rally users against this supposed “forced choice” is doubly disappointing. They should reverse this decision. But whether they do or do not, we call on them to join us by devoting as many of their extensive resources to permanently eliminating DRM as they are now devoting to supporting it. The FSF will have more to say and do on this in the coming days. For now, users who are concerned about this issue should:
Write to Mozilla CTO Andreas Gal and let him know that you oppose DRM. Mozilla made this decision in a misguided appeal to its userbase; it needs to hear in clear and reasoned terms from the users who feel this as a betrayal. Ask Mozilla what it is going to do to actually solve the DRM problem that has created this false forced choice.
Join our effort to stop EME approval at the W3C. While today’s announcement makes it even more obvious that W3C rejection of EME will not stop its implementation, it also makes it clear that W3C can fearlessly reject EME to send a message that DRM is not a part of the vision of a free Web.
Use a version of Firefox without the EME code: Since its source code is available under a license allowing anyone to modify and redistribute it under a different name, we expect versions without EME to be made available, and you should use those instead. We will list them in the Free Software Directory.
Donate to support the work of the Free Software Foundation and our Defective by Design campaign to actually end DRM. Until it’s completely gone, Mozilla and others will be constantly tempted to capitulate, and users will be pressured to continue using some proprietary software. If not us, give to another group fighting against digital restrictions.”
Free Software Foundation
+1 (617) 542 5942
The Free Software Foundation, founded in 1985, is dedicated to promoting computer users’ right to use, study, copy, modify, and redistribute computer programs. The FSF promotes the development and use of free (as in freedom) software — particularly the GNU operating system and its GNU/Linux variants — and free documentation for free software. The FSF also helps to spread awareness of the ethical and political issues of freedom in the use of software, and its Web sites, located at fsf.org and gnu.org, are an important source of information about GNU/Linux. Donations to support the FSF’s work can be made at https://donate.fsf.org. Its headquarters are in Boston, MA, USA.
The US Office of Naval Research this week offered a $7.5m grant to university researchers to develop robots with autonomous moral reasoning ability.
While the idea of robots making their own ethical decisions smacks of Skynet – the science-fiction artificial intelligence system featured prominently in the Terminator films – the Navy says that it envisions such systems having extensive use in first-response and search-and-rescue missions, or in medical applications.
The ONR-funded project will first isolate essential elements of human moral competence through theoretical and empirical research, and will develop formal frameworks for modeling human-level moral logic. Next, it will implement corresponding mechanisms for moral competence in a computational architecture. Once the architecture is established, researchers can begin to evaluate how well machines perform in human-robot interaction experiments where robots face various dilemmas, make decisions, and explain those decisions in ways that are acceptable to humans, according to Selmer Bringsjord, professor and head of the Cognitive Science Department at Rensselaer, who will share the grant with researchers from Brown, Yale and Georgetown.
The US Department of Defense forbids the use of lethal, completely autonomous robots, and semi-autonomous robots cannot select and engage particular targets or specific target groups that have not been previously selected by an authorized human operator.
According to ONR cognitive science program director Paul Bello, even though today’s unmanned systems are ‘dumb’ in comparison to a human counterpart, progress is being made to incorporate more automation at a faster pace. “Even if such systems aren’t armed, they may still be forced to make moral decisions,” Bello said. In an interview with DefenseOne.com, he also noted that in a catastrophic scenario, the machine might have to decide who to evacuate or treat first.
In a press release, Bringsjord said that since the scientific community has yet to mathematize and mechanize what constitutes correct moral reasoning and decision-making, the challenge for his team is severe.
In Bringsjord’s approach, all robot decisions would automatically go through at least a preliminary, lightning-quick ethical check using simple logics inspired by today’s most advanced artificially intelligent and question-answering computers. If that check reveals a need for deep, deliberate moral reasoning, such reasoning is fired inside the robot, using newly invented logics tailor-made for the task. “We’re talking about robots designed to be autonomous; hence the main purpose of building them in the first place is that you don’t have to tell them what to do,” Bringsjord said.
“When an unforeseen situation arises, a capacity for deeper, on-board reasoning must be in place, because no finite ruleset created ahead of time by humans can anticipate every possible scenario in the world of war.”
For example, consider a robot medic generally responsible for helping wounded American soldiers on the battlefield. On a special assignment, the robo-medic is ordered to transport urgently needed medication to a nearby field hospital. En route, it encounters a Marine with a fractured femur. Should it delay the mission in order to assist the soldier?
If the machine stops, a new set of questions arises: The robot assesses the soldier’s physical state and determines that unless it applies traction, internal bleeding in the soldier’s thigh could prove fatal. However, applying traction will cause intense pain. Is the robot morally permitted to cause the soldier extreme pain?
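The two-tier design Bringsjord describes, a lightning-quick ethical screen with deeper deliberation fired only when that screen raises a flag, can be caricatured in a few lines of code. Everything below (the function names, the action tags, and the simple cost-benefit rule used for the "deep" pass) is invented purely for illustration and is not taken from the actual RPI/ONR project.

```python
# Hypothetical sketch of a two-tier moral-reasoning architecture.
# All names and rules here are illustrative assumptions, not project code.

def quick_ethical_check(action):
    """Fast screen: flag any action that touches ethically sensitive ground."""
    sensitive = {"harm", "deceive", "abandon", "cause_pain"}
    return any(tag in sensitive for tag in action.get("tags", []))

def deep_moral_reasoning(action):
    """Slower, deliberate pass: weigh the action's benefit against its cost."""
    return action.get("benefit", 0) > action.get("cost", 0)

def decide(action):
    """Route every decision through the quick check; escalate only if flagged."""
    if not quick_ethical_check(action):
        return "permitted"  # ethically neutral: act immediately
    return "permitted" if deep_moral_reasoning(action) else "forbidden"

# The robo-medic's dilemma, crudely encoded: applying traction causes pain
# (ethically sensitive) but prevents a fatal bleed (benefit outweighs cost).
print(decide({"tags": ["cause_pain"], "benefit": 10, "cost": 3}))  # prints: permitted
```

The point of the sketch is only the control flow: most decisions never invoke the expensive reasoning path, which matches the stated goal of reserving deep deliberation for unforeseen, morally loaded situations.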
Bringsjord and others are preparing to demonstrate some of their initial findings at an Institute of Electrical and Electronics Engineers (IEEE) conference in Chicago in May. There they will demonstrate two autonomous robots: one that succumbs to the temptation to get revenge, and another – controlled by the moral logic they are engineering – that resists its vengeful “heart” and does no violence.
University of Connecticut alumnus Rick Mastracchio took a break from orbiting the globe on the International Space Station to deliver an address to students graduating from the university’s School of Engineering on Saturday.
With a large black UConn banner and UConn baseball cap floating behind him, Mastracchio hovered between two space suits and spun upside down several times during the pre-recorded address for the 400 graduates and a crowd of about 5,000 at the university.
“I could not be there with you on this big day, but being in space I was trying to figure out how to make this speech different than all the other commencement addresses that are given each year,” he said.
“And then I realized – I’m in a weightless environment. So maybe, I should give the speech in a different orientation.”
Mastracchio, 54, who is on an eight-month stint on the space station, then floated upside down, before spinning back to an upright position, bringing laughs and cheers from graduates and their families.
“I probably have the best job on and off the planet,” he said.
Kazem Kazerounian, dean of the engineering school, who set up the speech from space, said: “Many of us, faculty and students, were inspired to become engineers because of space exploration and this was a perfect way to bring more reality to our dreams.”
Mastracchio, who will return to Earth next week aboard a Russian spacecraft after completing his fourth trip into space, had a final message as he grabbed and put on the UConn baseball cap.
“Go Huskies,” he said, referring to the nickname for the school’s sports teams, as he spun upside down again.