Behind the apartment’s curtain, a tough guy is boxing, throwing left and right hooks and jabs, and lunging forward, enough to make any passing criminal think twice before breaking in.
The image is nothing more than a projected shadow but one that a Japanese apartment management company hopes will help protect and reassure women living by themselves.
Still in the prototype stage, “Man on the Curtain” uses a smartphone connected to a projector to throw a moving shadow of a man doing various energetic activities onto a curtain.
Customers can choose from a dozen different scenarios that show their man boxing, doing karate and even swinging a baseball bat.
To mix things up a bit, the man can calm down and do more mundane things like get dressed, chill out with a guitar or even do some vacuuming around the flat.
The system was developed for security at buildings run by Leopalace21 Corp., said Keiichi Nakamura, manager of the firm’s advertising department.
Queries from the public prompted the company to think bigger and consider offering it for sale. But some people have had doubts about how effective it might be, said Nakamura.
In particular, criminals might sooner or later work out that a “man behind the curtain” who spends his whole time shadow boxing actually means a woman is alone inside.
“If projecting a shadow makes a woman an easy target by showing criminals there’s nobody home, that would put the cart before the horse,” he said.
“So we’d like to commercialize it once we add variety, such as releasing a new video every day.”
Reporting by Kwiyeon Ha; Writing by Elaine Lies; Editing by Robert Birsel
“Another morning, another bit of casual misogyny & abuse”, ABC journalist Leigh Sales lamented last week after receiving a tweet accusing her of “virtually” performing sexual acts on her 7.30 guests. Sales’s comments draw our attention back to the abuse routinely encountered by women, people of colour and LGBTQ people on social media. Indeed, such online encounters appear to be so routine for journalists such as Sales that they are a mundane occurrence.
Of course, the abuse of women and minority groups in high-profile positions is, sadly, not new.
In 2016, The Guardian analysed abusive comments posted on its articles. Of the “top ten” most abused journalists, eight were women. The other two were black men. Of the top ten least abused, all were men.
Women in the public sphere have also drawn on the hashtag #mencallmethings to highlight the abuse they receive from men for daring to contribute to public discourse or to occupy positions of power. This type of misogynist abuse is so tediously predictable that one researcher has even developed a “rapeglish” tool that automatically generates strings of abuse.
Experiencing sexual harassment and abuse online is hardly limited to journalists and public figures. Australian research has demonstrated that such experiences are routine for women and LGBTQ people. It also shows that cisgender, heterosexual men do experience abuse online. However, women and LGBTQ groups experience more sexualised abuse, with men much more likely to be the perpetrators of this abuse (but, of course, #notallmen).
While the occurrence of sexist harassment online is well documented, we less often consider what might be driving this behaviour.
Is the answer online?
The nature of online spaces is often held up as a causal factor in online sexism and misogyny. We see this through the claim that the anonymity online spaces afford allows this behaviour to occur. The argument goes that these men wouldn’t say these things to women in real life without the protection of anonymity.
While the internet certainly facilitates aspects of this behaviour, it doesn’t directly cause it. Anonymity might make it easier to engage in and get away with these actions.
Online cultures can work to support and reinforce sexist abuse – with perpetrators seeking out online communities that normalise and condone this behaviour. It’s often further reinforced by the lack of consequences from online platforms.
However, this doesn’t tell us why these perpetrators are targeting women and other marginalised groups. Likewise, these cultures of support also exist offline. While peer support is certainly important in explaining why sexual violence occurs, it is not unique to online spaces.
The claim that these men wouldn’t make such comments to women’s faces is also problematic. As my own research on street harassment shows, some men do make these types of abusive comments to women in person.
Gender, power and violence
There is little research that has asked perpetrators to account for why they engage in this behaviour. Journalist Ginger Gorman came to the conclusion in her investigatory work that trolls (those who perpetrate abuse online) are “narcissists”.
For some, trolling acts as an apparent source of “fun” or entertainment, though it is also much more than this. A recent study on the related practice of “revenge porn” or image-based sexual abuse found that perpetrators engaged in this behaviour to express power and control over an ex-partner. They used the non-consensual posting of images to reaffirm their sense of masculinity.
We can look to the research on violence against women and other forms of abuse more broadly to point to some likely causal explanations. Researchers have comprehensively demonstrated the ways in which sexist online abuse forms part of the continuum of sexual violence. As with all forms of sexual violence, we can understand the actions of perpetrators as situated within a mix of individual, social, cultural and structural causes.
Adherence to strict or rigid gender norms – that is, our ideas about what it means to be a “man” or a “woman” – is one such factor associated with perpetration of various forms of gender-based violence. Certainly, it is plausible that these norms underpin online abuse. Women in high-profile positions, such as Sales, could be seen as “stepping out of line” by challenging traditional gender norms.
This suggests that men’s online abuse of women is fundamentally about power and reasserting the dominance of a particular type of masculinity. As cyberhate researcher Dr Emma Jane explains, online abuse occurs:
Because men continue to hold a disproportionate share of the political, economic, and social power, some using various forms of violence to keep women in their place.
Online abuse occurs because of, as well as actively reinforcing and perpetuating, disparate gender (and other) power relations. It can be used in an attempt to silence and exclude women from public (online) discussion, and in an attempt to “reclaim” online spaces from women who have the temerity to engage in these spaces.
Online abuse may appear to be ostensibly different from rape or sexual assault. However, the same norms and power structures underpin these acts. It is to these we need to look in understanding and, ultimately, challenging and changing the actions of these men.
Concern about Facebook Inc’s (FB.O) respect for data privacy is widening to include the information it collects about non-users, after Chief Executive Mark Zuckerberg said the world’s largest social network tracks people whether they have accounts or not.
Privacy concerns have swamped Facebook since it acknowledged last month that information about millions of users wrongly ended up in the hands of political consultancy Cambridge Analytica, a firm that has counted U.S. President Donald Trump’s 2016 electoral campaign among its clients.
Zuckerberg said on Wednesday under questioning by U.S. Representative Ben Luján that, for security reasons, Facebook also collects “data of people who have not signed up for Facebook.”
Lawmakers and privacy advocates immediately protested the practice, with many saying Facebook needed to develop a way for non-users to find out what the company knows about them.
“We’ve got to fix that,” Representative Luján, a Democrat, told Zuckerberg, calling for such disclosure, a move that would have unclear effects on the company’s ability to target ads. Zuckerberg did not respond. On Friday Facebook said it had no plans to build such a tool.
Critics said that Zuckerberg has not said enough about the extent and use of the data. “It’s not clear what Facebook is doing with that information,” said Chris Calabrese, vice president for policy at the Center for Democracy & Technology, a Washington advocacy group.
COOKIES EVERYWHERE
Facebook gets some data on non-users from people on its network, such as when a user uploads email addresses of friends. Other information comes from “cookies,” small files stored via a browser and used by Facebook and others to track people on the internet, sometimes to target them with ads.
“This kind of data collection is fundamental to how the internet works,” Facebook said in a statement to Reuters.
Asked if people could opt out, Facebook added, “There are basic things you can do to limit the use of this information for advertising, like using browser or device settings to delete cookies. This would apply to other services beyond Facebook because, as mentioned, it is standard to how the internet works.”
Facebook often installs cookies on non-users’ browsers if they visit sites with Facebook “like” and “share” buttons, whether or not a person pushes a button. Facebook said it uses browsing data to create analytics reports, including about traffic to a site.
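The mechanism described above can be illustrated with a toy simulation. This is not Facebook’s actual code, and the server class and site names are invented for the example; it only sketches the general principle that a unique third-party cookie, sent back on every page that embeds the same widget, lets one server link visits across unrelated sites:

```python
# Toy simulation of third-party cookie tracking: an embedded widget's
# server recognises the same browser across unrelated sites.

import uuid

class WidgetServer:
    """Stands in for any third party that serves embedded buttons."""

    def __init__(self):
        self.visits = {}  # cookie id -> list of sites seen

    def serve_button(self, site, cookie=None):
        # First contact: assign the browser a unique cookie.
        if cookie is None:
            cookie = str(uuid.uuid4())
        # Every later page load on ANY embedding site sends the cookie
        # back, so the server can link the visits together.
        self.visits.setdefault(cookie, []).append(site)
        return cookie

server = WidgetServer()
cookie = server.serve_button("news-site.example")     # cookie is set
cookie = server.serve_button("shop.example", cookie)  # same browser, new site
cookie = server.serve_button("blog.example", cookie)

# The server now holds a cross-site browsing trail for one browser,
# whether or not the person ever clicked the button.
print(server.visits[cookie])
```

Deleting cookies, as Facebook’s statement suggests, breaks exactly this chain: the next request arrives with no cookie, and the server starts a fresh, unlinked trail.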
The company said it does not use the data to target ads, except those inviting people to join Facebook.
Advocates and lawmakers say they are singling out Facebook because of its size, rivaled outside China only by Alphabet Inc’s (GOOGL.O) Google, and because they allege Zuckerberg was not forthcoming about the extent and reasons for the tracking.
“He’s either deliberately misunderstanding some of the questions, or he’s not clear about what’s actually happening inside Facebook’s operation,” said Daniel Kahn Gillmor, a senior staff technologist at the American Civil Liberties Union.
Zuckerberg, for instance, said the collection was done for security purposes, without explaining further or saying whether it was also used for measurement or analytics, Gillmor said, adding that Facebook had a business incentive to use the non-user data to target ads.
Facebook declined to comment on why Zuckerberg referred to security only.
Gillmor said Facebook could build databases on non-users by combining web browsing history with uploaded contacts. Facebook said on Friday that it does not do so.
The ACLU is pushing U.S. lawmakers to enact broad privacy legislation including a requirement for consent prior to data collection.
The first regulatory challenge to Facebook’s practices for non-users may come next month when a new European Union law, known as the General Data Protection Regulation (GDPR), takes effect and requires notice and consent prior to data collection.
At a minimum, “Facebook is going to have to think about ways to structure their technology to give that proper notice,” said Woodrow Hartzog, a Northeastern University professor of law and computer science.
Facebook said in its statement on Friday, “Our products and services comply with applicable law and will comply with GDPR.”
The social network would be wise to recognize at least a right to know, said Michael Froomkin, a University of Miami law professor.
“If I’m not a Facebook user, I ought to have a right to know what data Facebook has about me,” Froomkin said.
You are a rookie law enforcement officer, onboard a helicopter heading into the main compound of Project at Eden’s Gate, a religious cult operating across a huge stretch of Montana. A towering statue of the militia’s leader, Joseph Seed, rises into the sky. With a warrant for the arrest of Seed, you navigate a warren of buildings patrolled by aggressive white men and their snapping dogs, before entering a white-boarded church. A haunting rendition of Amazing Grace plays in the background as you meet Seed for the first time, in an almost dream-like sequence. From there, you are transported to an intense face-off between militia extremists and federal officials.
This is what you would experience on playing the new Ubisoft video game Far Cry 5 (2018). Its story speaks to what seems a powerful political moment, of an American nation literally at war with itself.
While already a huge financial success (with reports of nearly five million copies sold in its first week of release), Ubisoft’s title has been widely criticised for its overt lack of political message. The Montreal-based company may have promoted its game as a serious take on religious and political radicalism, but so far journalists have labelled Far Cry 5 a title unwilling to squarely take aim at Trump’s America, or speak directly to matters of contemporary racism, endemic gun culture, or right-wing extremism. Instead, reviewers have called it “totally unconvincing” (PC Gamer), “a missed opportunity” (The Outline), and a game that ultimately “says pretty much nothing about” modern America (The Guardian).
Are we being too harsh on the game? After all, most entertainment companies hype their products. Equally, would a film or novel tackling religious cults be criticised for not engaging with the wider problems of Trump’s America? In my view, video games do not need to make blatant political statements to be considered art or satire, nor do they need strong messages to have impact. Ultimately, gamers make their own readings and experiences, without the need to be constantly “billboarded”.
The Last Supper
Far Cry 5 also still has a message; it is just more subtle, and yes, more peripheral, than first imagined. The core image of the game is a digital recreation of the Last Supper, reminiscent of Leonardo da Vinci’s mural of the late 15th century. Ubisoft depicts Seed as a preacher at the centre of a long table, with open hands gesturing to his gathered disciples – all white, hardy and unkempt survivalists. The table features a mass of armaments from hunters’ knives to bazookas. Seed uses the Stars and Stripes as his tablecloth.
It is a great picture: subversive and satirical, intriguing and ambiguous. It is true that the gameplay rarely reaches such iconographic heights, but it asserts the same sense of destabilisation and decay. The game has something to say if you listen.
While Far Cry 5 is set in contemporary Montana (and speaks to a recent rise in home-grown extremism), its sense of conflict evokes an earlier period, specifically the mid-1990s, when militia groups resembling Seed’s seemed on the verge of having real impact on American society. Specifically, the game character of Seed closely resembles David Koresh, leader of the Branch Davidians, a religious cult whose members committed mass suicide during a federal-led siege at Waco, Texas, in 1993. Beginning with the Ruby Ridge siege of 1992, events climaxed in 1995, when Timothy McVeigh planted a bomb at the Alfred P Murrah federal building in Oklahoma City that ripped the structure apart, and killed 168 people.
Seeking to understand the ascendancy of such radicalism, scholars pointed to rural impoverishment (linked with Reaganomics), isolation, and disenfranchisement. Transposing the mid-1990s to 2018, Far Cry 5 suggests we have something to learn from that difficult moment. It leaves questions for the player to ponder, such as at what point disillusionment turns into rebellion, as well as highlighting the paradox of religious groups who worship their weaponry. As one rescued civilian puzzles: “For holy folks, they sure put a lot of faith in their guns.” The game leaves the player to decide the bigger lessons.
Doomsday
The image of Joseph Seed itself smacks of prophecy. Lead writer Drew Holmes explains: “We wanted to tell a story about a man who believes the end of the world is coming.”
Far Cry 5 is about one American who invites doomsday. Like most post-9/11 video games, Ubisoft’s title explores the dystopian theme of a nation falling apart, with the player, as hero, sent in as a loyal serviceman (in this case, a sheriff’s deputy) to raise the flag. Like many games, it is a decidedly cathartic, adrenaline-fuelled and redemptive campaign. The player actively saves small-town America from a lurking threat, and while action dominates the narrative, there is always a sense of righteousness and patriotic duty on display.
The game is also about hope. Far Cry 5 counterposes the natural beauty of Montana (introduced as “America the beautiful” – a land of grain silos, pick-up trucks, and the Jeffersonian agrarian idyll) against scenes of darkness, such as a dank bar where locals talk of unwelcome and ugly thugs taking over. The fight for Hope County, the fictional territory where the game takes place, is actually a fight over hope itself: the hope offered by a misled leader with vague talk of saving people, especially the disenchanted and white, versus the truer hope offered by traditional American values and governance. Allusions to false messiahs and even a mission called “Make Hope Great Again” to some degree satirise Trump’s America.
But the real danger of Joseph Seed lies in the mystery of where he’s planning on taking his Americans. As heard on a radio at one survivalist’s bunker: “You are my children, and together, we will march to …” Then the transmission fails.
It is important that as players we interpret the clues, think for ourselves, and co-create the stories. Far Cry 5 offers an immersive and atmospheric digital America for us to explore. It’s a good game precisely because it shies from outright criticism of Trump’s America. After all, we already have that in spades in the real world.
US regulators Wednesday approved the first device that uses artificial intelligence to detect eye damage from diabetes, allowing regular doctors to diagnose the condition without interpreting any data or images.
Facebook chief Mark Zuckerberg was set for a fiery face-off on Capitol Hill Tuesday as he attempts to quell a firestorm over privacy and security lapses at the social network that have angered lawmakers and the site's two billion users.
Zuckerberg, making his first formal appearance at a Congressional hearing, will seek to allay widespread fears ignited by the leaking of private data on tens of millions of users to a British firm working on Donald Trump's 2016 campaign.
The scandal has sparked fresh talk about regulation of social media platforms, and Facebook in the past week has sought to stem criticism by endorsing at least one legislative proposal, which would require better labeling and disclosure on political advertising.
Senator Bill Nelson, one of the lawmakers who met privately Monday with Zuckerberg, said he believes the 33-year-old CEO is taking the matter seriously.
"I believe he understands that regulation could be right around the corner," the Florida Democrat said.
Other lawmakers were less clear about the need for new regulations.
Republican Senator John Kennedy of Louisiana said, "I'm not interested in regulating Facebook. I'm interested in Facebook regulating itself and solving the problems. I come in peace."
Zuckerberg was set to appear before a Senate panel from 1815 GMT, with another session in the House of Representatives Wednesday.
The huge social network has begun alerting some users about whether their data was leaked to the British firm Cambridge Analytica.
Notification is among several steps pledged by Facebook to fix pervasive problems on data security and manipulation of the platform used by some two billion people worldwide.
Senate Judiciary Chairman Charles Grassley said Tuesday's hearing is the first step in an "open dialogue about how we address growing consumer privacy concerns."
"The tech industry has a duty to respond to widespread and growing privacy concerns and restore the public trust. The status quo no longer works," Grassley added.
- Suit and tie -
On Monday, Zuckerberg ditched his trademark T-shirt for a somber dark suit and tie as he made the rounds on Capitol Hill and sounded contrite about Facebook's conduct.
"We didn't take a broad enough view of our responsibility, and that was a big mistake. It was my mistake, and I'm sorry," Zuckerberg said in his written testimony released by the House commerce committee.
"I started Facebook, I run it, and I'm responsible for what happens here."
In his written remarks, Zuckerberg called Facebook "an idealistic and optimistic company" and said: "We focused on all the good that connecting people can bring."
But he acknowledged that "it's clear now that we didn't do enough to prevent these tools from being used for harm as well. That goes for fake news, foreign interference in elections, and hate speech, as well as developers and data privacy."
Zuckerberg said he has called for more investments in security that will "significantly impact our profitability going forward," adding: "I want to be clear about what our priority is: protecting our community is more important than maximizing our profit."
- 'Investigating every app' -
Zuckerberg recounted a list of steps announced by Facebook aimed at averting a repeat of the improper use of data by third parties like Cambridge Analytica, and noted that other applications were also being investigated to determine if they did anything wrong.
"We're in the process of investigating every app that had access to a large amount of information before we locked down our platform in 2014," said Zuckerberg.
"If we detect suspicious activity, we'll do a full forensic audit. And if we find that someone is improperly using data, we'll ban them and tell everyone affected."
Zuckerberg met behind closed doors with Senators Bill Nelson of Florida and Dianne Feinstein of California, among others.
If you use Google or Facebook, you may have wondered just how much of your personal data these big internet giants have access to. This is a good question to ask in our modern era of Big Data, constant connectivity and rapidly decreasing personal privacy. Some people, like Washington State Chief Privacy Officer Alex Alben, even argue that your personal data isn’t really “personal” at all. In other words, you may have unwittingly agreed to give your deepest information to third-party vendors through websites and apps simply by agreeing to their lengthy and frequently skimmed Terms of Service.
By the looks of it, Google seems to have some of the most invasive amounts of data on its users. This isn’t to say the company is using personal data on people for malicious and nefarious purposes. But the frequency, detail and amount it has amassed over the years are beginning to put people on edge. Let’s start off with location. If you have Google Maps enabled (like many of us), your physical movements and the time you take to get from Point A to Point B, wherever that may be, have been logged in its database. If you want to see proof of this activity, look at your Google timeline.
Then there’s your search history. Google maintains a database of your search entries as a way to learn more about you and your preferences. But if you fear that this constant logging of your personal search history is a dash too deep for your taste, you need to delete your search history from all the devices you own. That’s not all. Ads, too, factor into Google’s profiles of its users. To give you an example, Google has an advertisement profile on me; its algorithm asserts that I'm a female between the ages of 25 and 34 and that I might like computers, hair care and politics. Google presents ads based on the personal information you give the website, including your age, gender, location, and other metrics. Plus, Google stores your YouTube search history and maintains a log of information on the apps you use. From the amount you spend on these apps to the people you talk to, Google stores that information in its database.
Suppose you’re not exactly excited about your digital footprint being so minutely tracked. When it comes to Google, you can do two things. For starters, you can download a copy of everything Google has on you through Google Takeout. Depending on how often you’ve used its services, the amount of information can range from kilobytes to gigabytes and more.
After that, and perhaps more importantly, you can opt out of the Google Analytics program. Google Analytics lets website owners see traffic, number of clicks, time spent on a page, and a lot more for their own analyses. You can refuse to be part of this data collection by using the Google Analytics opt-out add-on for your browser. These little steps can restore some element of privacy to your online activity.
Then there’s Facebook. Amid the Cambridge Analytica scandal, the social network giant is under massive fire from observers who say its practices on privacy are reprehensible. With many people joining the #DeleteFacebook sentiment, the company recently shared an update in its security settings, saying that access to it would be more readily available for users. But if you’re interested in knowing just how much Facebook has on you in terms of personal data, check out its download feature. Go to your general account settings and look for “Download a copy of your Facebook data” at the bottom of the options.
It might be slightly jarring to see just how much Facebook logs about its users. Between personal conversations, phone numbers, apps, photos, videos, events, locations and a whole lot more, Facebook’s data can be converted into tons of documents on individual users. I’ll give you my example. Since 2008, Facebook has collected 430.1 megabytes of personal data on me. To make sense of such a colossal amount, converting it to a Word document helps. Since one megabyte is roughly 500 character-filled pages of text, that’s about 215,050 pages on yours truly. To make matters more uncomfortable, that’s hundreds of novels’ worth of text.
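The arithmetic behind that figure is easy to check. The 500-pages-per-megabyte figure is the rough rule of thumb used above (about 2,000 characters per page), not an exact conversion:

```python
# Back-of-the-envelope check of the pages estimate:
# ~500 character-filled pages per megabyte of plain text.

megabytes = 430.1
pages_per_mb = 500

pages = megabytes * pages_per_mb
print(round(pages))  # 215050
```

A typical novel runs a few hundred pages, so 215,050 pages corresponds to several hundred novels.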
While Facebook tries to figure out how to respond to growing concern over its privacy settings, you can do your (small) part in tightening your profile. You can opt out of Facebook’s API sharing feature so that third-party websites, games and applications don’t have access to your data.
All of this is to say that when you decide to use a website or program, read its terms attentively to see what you’re getting into. Most of the time, the most unnerving aspects of Google and Facebook are actually part of their openly stated business models. As a Google spokesperson told CNBC News, “In order to make the privacy choices that are right for them, it's essential that people can understand and control their Google data. Over the years, we've developed tools like My Account expressly for this purpose, and we'd encourage everyone to review it regularly.”
Atlanta’s top officials holed up in their offices on Saturday as they worked to restore critical systems knocked out by a nine-day-old cyber attack that plunged the Southeastern U.S. metropolis into technological chaos and forced some city workers to revert to paper.
On an Easter and Passover holiday weekend, city officials labored in preparation for the workweek to come.
Police and other public servants have spent the past week trying to piece together their digital work lives, recreating audit spreadsheets and conducting business on mobile phones in response to one of the most devastating “ransomware” virus attacks to hit an American city.
Three city council staffers have been sharing a single clunky personal laptop brought in after cyber extortionists attacked Atlanta’s computer network with a virus that scrambled data and still prevents access to critical systems.
“It’s extraordinarily frustrating,” said Councilman Howard Shook, whose office lost 16 years of digital records.
One compromised city computer seen by Reuters showed multiple corrupted documents with “weapologize” and “imsorry” added to file names.
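Markers like these make affected files easy to spot programmatically. The sketch below is hypothetical and is not a tool Atlanta used; it only illustrates how a responder might sweep file names for the marker strings reported on the compromised machine:

```python
# Illustrative (hypothetical) triage sketch: flag file names carrying
# the marker strings seen appended by the ransomware.

MARKERS = ("weapologize", "imsorry")

def flag_suspicious(filenames):
    """Return the file names containing a known ransomware marker."""
    return [name for name in filenames
            if any(marker in name.lower() for marker in MARKERS)]

files = ["budget_2018.xlsx.weapologize",
         "audit_notes.docx",
         "case_0341.pdf.imsorry"]

print(flag_suspicious(files))
# ['budget_2018.xlsx.weapologize', 'case_0341.pdf.imsorry']
```

Flagging renamed files only maps the damage, of course; it does nothing to recover the encrypted contents.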
Ransomware attacks have surged in recent years as cyber extortionists moved from attacking individual computers to large organizations, including businesses, healthcare organizations and government agencies. Previous high-profile attacks have shut down factories, prompted hospitals to turn away patients and forced local emergency dispatch systems to move to manual operations.
Ransomware typically corrupts data rather than stealing it. The city of Atlanta has said it does not believe private residents’ information is in the hands of hackers, but officials do not know for sure.
City officials have declined to discuss the extent of damage beyond disclosed outages that have shut down some services at municipal offices, including courts and the water department.
Nearly 6 million people live in the Atlanta metropolitan area. The Georgia city itself is home to more than 450,000 people, according to the latest data from the U.S. Census Bureau.
City officials told Reuters that police files and financial documents were rendered inaccessible by unknown hackers who demanded $51,000 worth of bitcoin to provide digital keys to unlock scrambled files.
“Everything on my hard drive is gone,” City Auditor Amanda Noble said in her office housed in Atlanta City Hall’s ornate tower.
City officials have not disclosed the extent to which servers for backing up information on PCs were corrupted or what kind of information they think is unrecoverable without paying the ransom.
Noble discovered the disarray on March 22 when she turned on her computer and found that her files, encrypted by a powerful computer virus known as SamSam and renamed with gibberish, could not be opened.
“I said, ‘This is wrong,’” she recalled.
City officials then quickly entered her office and told her to shut down the computer before warning the rest of the building.
Noble is working on a personal laptop and using her smartphone to search for details of current projects mentioned in emails stored on that device.
Not all computers were compromised. Ten of 18 machines in the auditing office were not affected, Noble said.
OLD-SCHOOL ANALOG
Atlanta police returned to taking written case notes and have lost access to some investigative databases, department spokesman Carlos Campos told Reuters. He declined to discuss the contents of the affected files.
“Our data management teams are working diligently to restore normal operations and functionalities to these systems and hope to be back online in the very near future,” he said. By the weekend, he added, officers were returning to digital police reports.
Meanwhile, some city employees complained they have been left in the dark, unsure when it is safe to turn on their computers.
“We don’t know anything,” said one frustrated employee as she left for a lunch break on Friday.
FEEBLE
Like City Hall, whose 1930 neo-Gothic structure is attached to a massive modern wing, the city’s computer system is a combination of old and new.
“One of the reasons why municipalities are vulnerable is we just have so many different systems,” Noble said.
The city published results from a recent cyber-security audit in January, and had started implementing its recommendations before the ransomware virus hit. The audit called for better record-keeping and hiring more technology workers.
Councilman Shook said he is worried about how much the recovery will cost the city, but that he supports funding a cyber-security overhaul to counter future attacks.
For now his staff are temporarily sharing one aging laptop.
“Things are very slow,” he said. “It was a very surreal experience to be shut down like that.”
Mayor Keisha Lance Bottoms, who took office in January, has declined to say if the city paid the ransom ahead of a March 28 deadline mentioned in an extortion note whose image was released by a local television station.
Shook, who chairs the city council’s finance subcommittee, said he did not know whether the city is negotiating with the hackers, but that it appears no ransom has been paid to date.
The Federal Bureau of Investigation, which is helping Atlanta respond, typically discourages ransomware victims from paying up.
FBI officials could not immediately be reached for comment. A Department of Homeland Security spokesman confirmed the agency is helping Atlanta respond to the attack, but declined to comment further.
Hackers typically walk away when ransoms are not paid, said Mark Weatherford, a former senior DHS cyber official.
Weatherford, who previously served as California’s chief information security officer, said the situation might have been resolved with little pain if the city had quickly made that payment.
“The longer it goes, the worse it gets,” he said. “This could turn out to be really bad if they never get their data back.” (Reporting by Laila Kearney; additional reporting by Jim Finkle; editing by Daniel Bases and Jonathan Oatis)
Friends, family and colleagues of British scientist Stephen Hawking will gather Saturday to pay their respects at his private funeral in Cambridge, where he spent most of his extraordinary life. Hawking, who died on March 14 at the age of 76, was famously an atheist but his children Lucy, Robert and Tim have chosen the town's university church, St Mary the Great, to say their farewell.
In an email to me, Cambridge University scholar Aleksandr Kogan explained how his statistical model processed Facebook data for Cambridge Analytica. The accuracy he claims suggests it works about as well as established voter-targeting methods based on demographics like race, age and gender.
Regarding one key public concern, though, Kogan’s numbers suggest that information on users’ personalities or “psychographics” was just a modest part of how the model targeted citizens. It was not a personality model strictly speaking, but rather one that boiled down demographics, social influences, personality and everything else into a big correlated lump. This soak-up-all-the-correlation-and-call-it-personality approach seems to have created a valuable campaign tool, even if the product being sold wasn’t quite as it was billed.
But a key question has remained unanswered: Was Cambridge Analytica really able to effectively target campaign messages to citizens based on their personality characteristics – or even their “inner demons,” as a company whistleblower alleged?
Part of my own research focuses on understanding machine learning methods, and my forthcoming book discusses how digital firms use recommendation models to build audiences. I had a hunch about how Kogan and Chancellor’s model worked.
A clue lay in the Netflix Prize, the company’s famed competition to improve its movie recommendations. Software developer Simon Funk popularized a simple factor model during that contest, describing the “factors” it learns this way:
“So, for instance, a category might represent action movies, with movies with a lot of action at the top, and slow movies at the bottom, and correspondingly users who like action movies at the top, and those who prefer slow movies at the bottom.”
Factors are artificial categories, which are not always like the kind of categories humans would come up with. The most important factor in Funk’s early Netflix model was defined by users who loved films like “Pearl Harbor” and “The Wedding Planner” while also hating movies like “Lost in Translation” or “Eternal Sunshine of the Spotless Mind.” His model showed how machine learning can find correlations among groups of people, and groups of movies, that humans themselves would never spot.
Funk’s general approach used the 50 or 100 most important factors for both users and movies to make a decent guess at how every user would rate every movie. This method, often called dimensionality reduction or matrix factorization, was not new. Political science researchers had shown that similar techniques using roll-call vote data could predict the votes of members of Congress with 90 percent accuracy. In psychology the “Big Five” model had also been used to predict behavior by clustering together personality questions that tended to be answered similarly.
Still, Funk’s model was a big advance: It allowed the technique to work well with huge data sets, even those with lots of missing data – like the Netflix dataset, where a typical user rated only a few dozen films out of the thousands in the company’s library. More than a decade after the Netflix Prize contest ended, SVD-based methods, or related models for implicit data, are still the tool of choice for many websites to predict what users will read, watch, or buy.
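Funk documented his method publicly at the time: learn a short vector of factors for every user and every item by repeatedly nudging them to reduce prediction error. A minimal sketch of that style of model, in plain Python with invented toy ratings (the function name and data are hypothetical; the real systems used roughly 50 to 100 factors over millions of ratings, plus bias terms and other refinements):

```python
import random

def funk_factorize(ratings, n_users, n_items, n_factors=2,
                   lr=0.02, reg=0.02, epochs=500, seed=0):
    """Learn user and item factor vectors from (user, item, rating) triples
    by stochastic gradient descent, in the style Funk described."""
    rng = random.Random(seed)
    P = [[rng.gauss(0, 0.1) for _ in range(n_factors)] for _ in range(n_users)]
    Q = [[rng.gauss(0, 0.1) for _ in range(n_factors)] for _ in range(n_items)]

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - dot(P[u], Q[i])                 # how far off is the guess?
            for f in range(n_factors):
                pu, qi = P[u][f], Q[i][f]
                P[u][f] += lr * (err * qi - reg * pu)  # nudge user factor
                Q[i][f] += lr * (err * pu - reg * qi)  # nudge item factor
    return P, Q

# Toy ratings on a 1-5 scale: users 0 and 1 love item 0; user 2 hates it.
ratings = [(0, 0, 5), (0, 1, 1), (1, 0, 5), (2, 0, 1), (2, 1, 5)]
P, Q = funk_factorize(ratings, n_users=3, n_items=2)

# The learned factors can now fill in a missing cell -- user 1's
# unobserved rating of item 1 -- with a simple dot product.
guess = sum(p * q for p, q in zip(P[1], Q[1]))
```

The key property for what follows: the model is never told what the factors mean. It simply finds whatever dimensions best explain the correlations in the data.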
These models can predict other things, too.
Facebook knows if you are a Republican
In 2013, Cambridge University researchers Michal Kosinski, David Stillwell and Thore Graepel published an article on the predictive power of Facebook data, using information gathered through an online personality test. Their initial analysis was nearly identical to that used on the Netflix Prize, using SVD to categorize both users and things they “liked” into the top 100 factors.
The paper showed that a factor model made with users’ Facebook “likes” alone was 95 percent accurate at distinguishing between black and white respondents, 93 percent accurate at distinguishing men from women, and 88 percent accurate at distinguishing people who identified as gay men from men who identified as straight. It could even correctly distinguish Republicans from Democrats 85 percent of the time. It was also useful, though not as accurate, for predicting users’ scores on the “Big Five” personality test.
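The mechanics of such a factor model can be sketched with a tiny, entirely invented likes matrix (the real study used tens of thousands of users, tens of thousands of pages and the top 100 factors; here plain NumPy SVD stands in for their pipeline):

```python
import numpy as np

# Rows are users, columns are pages; 1 means the user "liked" that page.
# Users 0-2 mostly like one cluster of pages, users 3-5 the other, with a
# couple of crossover likes for realism. All data here is invented.
likes = np.array([
    [1, 1, 1, 0, 0, 0],
    [1, 1, 0, 0, 0, 0],
    [1, 1, 1, 1, 0, 0],   # a crossover like
    [0, 0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1, 0],   # another crossover
    [0, 0, 0, 1, 1, 1],
], dtype=float)

# Center each column, then factor the matrix with SVD.
X = likes - likes.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Each user's position on the strongest factor. The sign is arbitrary,
# but the two groups land on opposite sides of zero -- the factor has
# rediscovered the group split without ever being told it existed.
scores = U[:, 0] * s[0]
```

A classifier then only has to threshold these factor scores. With real data, a model of this shape can read off demographics and politics from the likes matrix as readily as anything labeled “personality.”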
Kogan and Chancellor, also Cambridge University researchers at the time, were starting to use Facebook data for election targeting as part of a collaboration with Cambridge Analytica’s parent firm SCL. Kogan invited Kosinski and Stillwell to join his project, but it didn’t work out. Kosinski reportedly suspected Kogan and Chancellor might have reverse-engineered the Facebook “likes” model for Cambridge Analytica. Kogan denied this, saying his project “built all our models using our own data, collected using our own software.”
What did Kogan and Chancellor actually do?
As I followed the developments in the story, it became clear Kogan and Chancellor had indeed collected plenty of their own data through the thisisyourdigitallife app. They certainly could have built a predictive SVD model like that featured in Kosinski and Stillwell’s published research.
So I emailed Kogan to ask if that was what he had done. Somewhat to my surprise, he wrote back.
“We didn’t exactly use SVD,” he wrote, noting that SVD can struggle when some users have many more “likes” than others. Instead, Kogan explained, “The technique was something we actually developed ourselves … It’s not something that is in the public domain.” Without going into details, Kogan described their method as “a multi-step co-occurrence approach.”
However, his message went on to confirm that his approach was indeed similar to SVD or other matrix factorization methods, like those used in the Netflix Prize competition and the Kosinski-Stillwell-Graepel Facebook model. Dimensionality reduction of Facebook data was the core of his model.
How accurate was it?
Kogan suggested the exact model used doesn’t matter much, though – what matters is the accuracy of its predictions. According to Kogan, the “correlation between predicted and actual scores … was around [30 percent] for all the personality dimensions.” By comparison, a person’s previous Big Five scores are about 70 to 80 percent accurate in predicting their scores when they retake the test.
Kogan’s accuracy claims cannot be independently verified, of course. And anyone in the midst of such a high-profile scandal might have incentive to understate his or her contribution. In his appearance on CNN, Kogan explained to an increasingly incredulous Anderson Cooper that, in fact, the models had actually not worked very well.
The accuracy Kogan claims actually seems a bit low, but plausible. Kosinski, Stillwell and Graepel reported comparable or slightly better results, as have several other academic studies using digital footprints to predict personality (though some of those studies had more data than just Facebook “likes”). It is surprising that Kogan and Chancellor would go to the trouble of designing their own proprietary model if off-the-shelf solutions seem to be just as accurate.
Importantly, though, the model’s accuracy on personality scores allows comparisons of Kogan’s results with other research. Published models with equivalent accuracy in predicting personality are all much more accurate at guessing demographics and political variables.
For instance, the similar Kosinski-Stillwell-Graepel SVD model was 85 percent accurate in guessing party affiliation, even without using any profile information other than likes. Kogan’s model had similar or better accuracy. Adding even a small amount of information about friends or users’ demographics would likely boost this accuracy above 90 percent. Guesses about gender, race, sexual orientation and other characteristics would probably be more than 90 percent accurate too.
Critically, these guesses would be especially good for the most active Facebook users – the people the model was primarily used to target. Users with less activity to analyze are likely not on Facebook much anyway.
When psychographics is mostly demographics
Knowing how the model is built helps explain Cambridge Analytica’s apparently contradictory statements about the role – or lack thereof – that personality profiling and psychographics played in its modeling. They’re all technically consistent with what Kogan describes.
A model like Kogan’s would give estimates for every variable available on any group of users. That means it would automatically estimate the Big Five personality scores for every voter. But these personality scores are the output of the model, not the input. All the model knows is that certain Facebook likes, and certain users, tend to be grouped together.
With this model, Cambridge Analytica could say that it was identifying people with low openness to experience and high neuroticism. But the same model, with the exact same predictions for every user, could just as accurately claim to be identifying less educated older Republican men.
The whole point of a dimension reduction model is to mathematically represent the data in simpler form. It’s as if Cambridge Analytica took a very high-resolution photograph, resized it to be smaller, and then deleted the original. The photo still exists – and as long as Cambridge Analytica’s models exist, the data effectively does too.
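The resized-photo analogy can be made concrete with synthetic data: a low-rank SVD of a hypothetical user-by-likes matrix stores far fewer numbers yet keeps almost all of the picture (everything below is invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical "high-resolution photo": a 20-user x 12-like matrix
# generated from just 2 underlying traits plus a little noise.
traits = rng.normal(size=(20, 2))
loadings = rng.normal(size=(2, 12))
data = traits @ loadings + 0.1 * rng.normal(size=(20, 12))

# "Resize the photo": keep only the top 2 SVD factors and discard the rest.
U, s, Vt = np.linalg.svd(data, full_matrices=False)
compressed = (U[:, :2] * s[:2]) @ Vt[:2]

# The compressed copy stores 20*2 + 2 + 2*12 = 66 numbers instead of 240,
# yet reproduces the original matrix almost exactly. Deleting the raw
# data does not delete what the model knows.
```

This is why destroying the original harvested profiles would not, by itself, destroy the capability they conferred: the factor matrices are the resized photograph.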
A Facebook Inc executive said in an internal memo in 2016 that the social media company needed to pursue adding users above all else, BuzzFeed News reported on Thursday, prompting disavowals from the executive and Facebook Chief Executive Officer Mark Zuckerberg.
The memo from Andrew Bosworth, a Facebook vice president, had not been previously reported as Facebook faces inquiries over how it handles personal information and the tactics the social media company has used to grow to 2.1 billion users.
Zuckerberg stood by Bosworth, who goes by the nickname “Boz,” while distancing himself from the memo’s contents. Bosworth confirmed the memo’s authenticity but in a statement he disavowed its message, saying its goal had been to encourage debate.
Facebook users, advertisers and investors have been in an uproar for months over a series of scandals, most recently privacy practices that allowed political consultancy Cambridge Analytica to obtain personal information on 50 million Facebook members. Zuckerberg is expected to testify at a hearing with U.S. lawmakers as soon as April.
“Boz is a talented leader who says many provocative things. This was one that most people at Facebook including myself disagreed with strongly. We’ve never believed the ends justify the means,” Zuckerberg said in a statement.
Bosworth wrote in the June 2016 memo that some “questionable” practices were all right if the result was connecting people.
“That’s why all the work we do in growth is justified. All the questionable contact importing practices. All the subtle language that helps people stay searchable by friends,” he wrote in the memo, which BuzzFeed published on its website.
He also urged fellow employees not to let potential negatives slow them down.
“Maybe it costs a life by exposing someone to bullies. Maybe someone dies in a terrorist attack coordinated on our tools. And still we connect people,” he wrote.
Bosworth said Thursday that he did not agree with the post today “and I didn’t agree with it even when I wrote it.
“Having a debate around hard topics like these is a critical part of our process and to do that effectively we have to be able to consider even bad ideas, if only to eliminate them,” Bosworth’s statement said.
(Reporting by David Ingram; editing by Grant McCool)
Facebook’s vice president made the shocking admission that people could be killed by terrorists using their social network. In a memo dated June 18, 2016, Andrew “Boz” Bosworth put in writing that their corporate growth depends on data harvesting like what Cambridge Analytica was doing.
“We connect people. Period. That’s why all the work we do in growth is justified. All the questionable contact importing practices. All the subtle language that helps people stay searchable by friends. All of the work we do to bring more communication in. The work we will likely have to do in China some day. All of it,” he wrote in the memo, acquired by BuzzFeed.
In another section of the memo he confessed the tool is used to "connect more people." As such, it "can be bad if they make it negative. Maybe it costs someone a life by exposing someone to bullies. Maybe someone dies in a terrorist attack coordinated on our tools.”
The memo was titled "The Ugly" and hadn't been released outside of digital circles.
In a statement posted on Twitter less than two years later, Bosworth said he no longer agreed with his own assessment. "I didn't agree with it even when I wrote it," he wrote.
Facebook on Wednesday launched a fresh effort to quell the firestorm over the hijacking of personal data, once again unveiling new privacy tools and settings to give users more control over how their information is shared.
The new features follow fierce criticism of the social network giant after it was revealed that the personal data of tens of millions of users was harvested by a British firm linked to Donald Trump's 2016 presidential campaign.
The company acknowledged that it needed to "do more to keep people informed," but said the changes have been "in the works for some time."
"We've heard loud and clear that privacy settings and other important tools are too hard to find," chief privacy officer Erin Egan and deputy general counsel Ashlie Beringer said in a blog post.
"We're taking additional steps in the coming weeks to put people more in control of their privacy."
The updates include easier access to Facebook's user settings and tools to easily search for, download and delete personal data stored on the site used by two billion people.
Facebook said a new privacy shortcuts menu will allow users to quickly increase account security, manage who can see their information and activity on the site, and control advertisements they see.
Facebook's terms of service and data policy are being updated to improve transparency about how the site collects and uses information, according to Beringer and Egan.
The social network said it is also shutting down "Partner Categories," a feature which enables more precise targeting of ads by combining information from Facebook with data aggregated by outside companies such as Experian and Acxiom.
"This product enables third-party data providers to offer their targeting directly on Facebook," product marketing director Graham Mudd said in a statement posted online.
"While this is common industry practice, we believe this step, winding down over the next six months, will help improve people's privacy on Facebook."
Earlier this month, whistleblower Christopher Wylie revealed political consulting company Cambridge Analytica had obtained profiles on 50 million Facebook users via an academic researcher's personality prediction app.
The app was downloaded by 270,000 people, but also scooped up their friends' data without consent -- as was possible under Facebook's rules at the time.
- Lukewarm praise -
Yet some analysts said Facebook and its chief Mark Zuckerberg have made similar promises in the past.
"Zuck promised easier, better privacy controls 'in the coming weeks' eight years ago," Zeynep Tufekci, a University of North Carolina professor who studies social media, said on Twitter.
"The solution isn't shifting the burden to the user because the problem is the negative externalities of the business model."
Jennifer Grygiel, a Syracuse University professor of communications, said the new privacy settings and tools "are so obviously important to users that one has to wonder why this wasn't already done."
She said Facebook has "some of the best talent in the industry" and that "their old interface was not a mistake, it was by design."
Dylan Gilbert of the consumer group Public Knowledge said Facebook's moves "are welcome steps forward" but "do little to remedy a larger systemic problem."
"Online platforms currently lack meaningful legal incentives to protect users before their privacy is violated," Gilbert said in a statement.
"Facebook similarly lacks business incentives to engage in responsible data collection because disgruntled advertisers don't have anywhere comparable to go."
- Deepening tech crisis -
Facebook's move comes as authorities around the globe investigate how the social network handles and shares private data, and after its shares have tumbled more than 15 percent, wiping out tens of billions in market value.
The crisis also threatens the Silicon Valley tech industry whose business model revolves around data collected on internet users.
The US Federal Trade Commission this week said it had launched a probe into whether Facebook violated consumer protection laws or a 2011 court-approved agreement on protecting private user data.
US lawmakers are trying to get Zuckerberg to come to Washington to testify on the matter.
Authorities in Britain have meanwhile seized data from Cambridge Analytica in their investigation, and EU officials have warned of consequences for Facebook.
Facebook has apologized and vowed to fix the problem.
On Wednesday, six consumer and privacy organizations called upon Facebook to cease all campaign contributions and election activity until it ensures the integrity of all apps on the platform.
"A company whose platform is self-admittedly powerful enough to influence elections, must stay out of them," said a letter from the groups including Consumer Watchdog, Electronic Privacy Information Center and the Center for Digital Democracy.