Uber has rolled out its self-driving car fleet in its hometown of San Francisco, despite lacking the proper permit that state regulators say is required.
Starting Wednesday, riders who request an UberX, one of the company's budget ride options, may be matched with a self-driving Uber. It is unclear how many of these cars Uber has in San Francisco.
Launching the program kicked off a battle with the California Department of Motor Vehicles, which said on Tuesday that Uber does not have a permit to test autonomous vehicles on California roads, and demanded the company follow the permitting process that is in place.
"Twenty manufacturers have already obtained permits to test hundreds of cars on California roads. Uber shall do the same," the agency said in a written statement.
Uber's self-driving cars have been seen around San Francisco since at least September.
Uber argues that its cars are not able to drive without a person monitoring them - a driver and an engineer sit in the front seats, ready to take over in situations such as a construction zone or pedestrian crossing - so the California law does not apply. California defines autonomous vehicles as cars that have the "capability" to drive "without the active physical control or monitoring of a natural person."
"All of our vehicles are compliant with applicable federal and state laws," a spokeswoman said.
In a company blog post, Uber called on California to take a more "pro-technology" approach to regulating autonomous cars.
"Several cities and states have recognized that complex rules and requirements could have the unintended consequence of slowing innovation," Uber said. "Our hope is that California, our home state and a leader in much of the world's dynamism, will take a similar view."
Uber said the San Francisco program will mimic its pilot in Pittsburgh, Pennsylvania, where three months ago Uber unveiled its secretive work on autonomous cars for the first time to the public. The company started with just four self-driving cars available to Pittsburgh passengers, although it had a fleet of more than a dozen for testing.
At that time, engineers at Uber's Advanced Technologies Center in Pittsburgh, where much of the company's research on autonomous cars takes place, emphasized that Uber was not attempting to build a driver assistance system. Rather, Uber had its sights on building fully autonomous cars, with no driver intervention.
Uber's San Francisco fleet features the Volvo XC90, an upgraded model from the Ford Fusions that were unveiled in Pittsburgh.
Protecting individual privacy from government intrusion is older than American democracy. In 1604, the attorney general of England, Sir Edward Coke, ruled that a man’s house is his castle. This was the official declaration that a homeowner could protect himself and his privacy from the king’s agents. That lesson carried into today’s America, thanks to our Founding Fathers’ abhorrence for imperialist Great Britain’s unwarranted search and seizure of personal documents.
They understood that everyone has something to hide, because human dignity and intimacy don’t exist if we can’t keep our thoughts and actions private. As citizens in the digital age, that is much more difficult. Malicious hackers and governments can monitor the most private communications, browsing habits and other data breadcrumbs of anyone who owns a smartphone, tablet, laptop or personal computer.
As an ethical hacker, my job is to help protect those who are unable, or lack the knowledge, to help themselves. People who think like hackers have some really good ideas about how to protect digital privacy during turbulent times. Here’s what they – and I – advise, and why. I have no affiliation or relationship with any of the companies listed below, except in some cases as a regular user.
Phone calls, text messaging and email
When you’re communicating with people, you probably want to be sure only you and they can read what’s being said. That means you need what is called “end-to-end encryption,” in which your message is transmitted as encoded text. As it passes through intermediate systems, like an email network or a cellphone company’s computers, all they can see is the encrypted message. When it arrives at its destination, that person’s phone or computer decrypts the message for reading only by its intended recipient.
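The idea can be sketched in a few lines of Python. This is a toy illustration only, not what any real messaging app uses (production systems rely on vetted protocols such as the Signal protocol): it derives a keystream from a key shared only by the two endpoints, so any relay in the middle sees nothing but ciphertext.

```python
import hashlib

def keystream(shared_key: bytes):
    """Yield an endless stream of pseudo-random bytes derived from the key."""
    counter = 0
    while True:
        block = hashlib.sha256(shared_key + counter.to_bytes(8, "big")).digest()
        yield from block
        counter += 1

def encrypt(shared_key: bytes, plaintext: bytes) -> bytes:
    # XOR each plaintext byte with the keystream; this is all a relay sees.
    return bytes(p ^ k for p, k in zip(plaintext, keystream(shared_key)))

decrypt = encrypt  # XOR is its own inverse, so the same function decrypts

key = b"shared only by sender and recipient"
message = b"meet at noon"
ciphertext = encrypt(key, message)   # intermediaries see only this
restored = decrypt(key, ciphertext)  # only the endpoints can recover this
print(ciphertext.hex())
```

Because the intermediate servers never hold the key, the hex gibberish above is all they can log or hand over.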
For phone calls and private text-message-like communication, the best apps on the market are WhatsApp and Signal. Both use end-to-end encryption, and are free apps available for iOS and Android. In order for the encryption to work, both parties need to use the same app.
For private email, Tutanota and ProtonMail lead the pack in my opinion. Both of these Gmail-style email services use end-to-end encryption, and store only encrypted messages on their servers. Keep in mind that if you send emails to people not using a secure service, the emails may not be encrypted. At present, neither service supports PGP/GPG encryption, which could allow security to extend to other email services, but they are reportedly working on it. Both services are also free and based in countries with strong privacy laws (Germany and Switzerland). Both can be used on PCs and mobile devices. My biggest gripe is that neither yet offers two-factor authentication for additional login security.
Avoiding being tracked
It is less straightforward to privately browse the internet or use internet-connected apps and programs. Internet sites and services are a complicated business, often loading information from many different online sources. For example, a news site might serve the text of an article from one computer, photos from another and related video from a third. It would also connect with Facebook and Twitter to allow readers to share articles and comment on them. Advertising and other services get involved too, allowing site owners to track how much time users spend on the site (among other data).
The easiest way to protect your privacy without totally changing your surfing experience is to install a small piece of free software called a “browser extension.” These add functionality to your existing web browsing program, such as Chrome, Firefox or Safari. The two privacy browser extensions that I recommend are uBlock Origin and Privacy Badger. Both are free, work with the most common web browsers and block sites from tracking your visits.
Encrypting all your online activity
If you want to be more secure, you need to ensure people can’t directly watch the internet traffic from your phone or computer. That’s where a virtual private network (VPN) can help. Simply put, a VPN is a collection of networked computers through which you send your internet traffic.
Instead of the normal online activity of your computer directly contacting a website with open communication, your computer creates an encrypted connection with another computer somewhere else (even in another country). That computer sends out the request on your behalf. When it receives a response – the webpage you’ve asked to load – it encrypts the information and sends it back to your computer, where it’s displayed. This all happens in milliseconds, so in most cases it’s not noticeably slower than regular browsing – and is far more secure.
For the simplest approach to private web browsing, I recommend Freedome by F-Secure because it’s only a few dollars a month, incredibly easy to use and works on computers and mobile devices. There are other VPN services out there, but they are much more complicated and would probably confuse your less technically inclined family members.
Additional tips and tricks
If you don’t want anyone to know what information you’re searching for online, use DuckDuckGo or F-Secure Safe Search. DuckDuckGo is a search engine that doesn’t profile its users or record their search queries. F-Secure Safe Search is not as privacy-friendly because it’s a collaborative effort with Google, but it provides a safety rating for each search result, making it a suitable search engine for children.
To add security to your email, social media and other online accounts, enable what is called “two-factor authentication,” or “2FA.” This requires not only a user name and password, but also another piece of information – like a numeric code sent to your phone – before allowing you to log in successfully. Most common services, like Google and Facebook, now support 2FA. Use it.
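The numeric codes these services send or display usually come from the TOTP standard (RFC 6238), which any program can reproduce. A minimal sketch using only Python's standard library, checked against the RFC's published test vector:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Time-based one-time password (RFC 6238), as 2FA apps compute it."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation, per RFC 4226
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at t=59s, 8 digits
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, at=59, digits=8))  # the RFC's published value is 94287082
```

The server and your phone share the secret once, then each computes the same 30-second code independently; an attacker with only your password cannot produce it.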
Encrypt the data on your phone and your computer to protect your files, pictures and other media. Both Apple iOS and Android have settings options to encrypt your mobile device.
And the last line of privacy defense is you. Only give out your personal information if it is necessary. When signing up for accounts online, do not use your primary email address or real phone number. Instead, create a throw-away email address and get a Google Voice number. That way, when the vendor gets hacked, your real data aren’t breached.
Germany's domestic intelligence agency on Thursday said it had seen a striking increase in Russian propaganda and disinformation campaigns aimed at destabilizing German society, and targeted cyber attacks against political parties.
"We see aggressive and increased cyber spying and cyber operations that could potentially endanger German government officials, members of parliament and employees of democratic parties," Hans-Georg Maassen, head of the domestic BfV intelligence agency, said in a statement.
Maassen, who raised similar concerns about Russian efforts to interfere in German elections in an interview with Reuters last month, cited what he called increasing evidence about such efforts and said further cyber attacks were expected.
The agency said it had seen a wide variety of Russian propaganda tools and "enormous use of financial resources" to carry out "disinformation" campaigns aimed at the Russian-speaking community in Germany, political movements, parties and other decision makers.
The goal of the effort was to spread uncertainty in society, "to weaken or destabilize the Federal Republic of Germany," and to strengthen extremist groups and parties, complicate the work of the federal government and influence political dialogue.
The agency said it had seen a "striking increase" in spear-phishing attacks attributed to the Russian hacking group APT 28, also known as "Fancy Bear" or Strontium, the same group blamed for the hack of the U.S. Democratic National Committee this year and a cyber attack on the German parliament in 2015.
The attacks were directed against German parties and members of parliament, the agency said, adding they were carried out by government bodies posing as "hacktivists".
"Propaganda and disinformation, cyber attacks, cyber espionage and cyber sabotage are part of the hybrid threat facing western democracies," Maassen said.
German officials have accused Moscow of trying to manipulate German media to fan popular angst over issues like the migrant crisis, weaken voter trust and breed dissent within the European Union so that it drops sanctions against Moscow.
But intelligence officials have stepped up their warnings in recent weeks, alarmed about the number of attacks.
Last month, German Chancellor Angela Merkel said she could not rule out Russia interfering in Germany's 2017 election through Internet attacks and misinformation campaigns.
Russian officials have denied all accusations of manipulation and interference intended to weaken the European Union or to affect the U.S. presidential election.
U.S. intelligence officials had warned in the run-up to the Nov. 8 presidential election of efforts to undermine the credibility of the vote that they believed were backed by the Russian government.
(Reporting by Andrea Shalal and Sabine Siebold; Editing by Janet Lawrence)
Of course, there is nothing new about fake news as such – the satirical site “The Onion” has long produced it. Fake news satire is part of “Saturday Night Live”’s Weekend Update and “The Daily Show.”
In these cases, the framework of humor is clear and explicit. That, however, is not the case in social media, which has emerged as a real news source. Pew Research Center reports that Facebook is “the most popular social media platform” and that “a majority of U.S. adults – 62 percent – get news on social media.” When people read fake news on social media, they may be tricked into thinking they are reading real news.
Both Google and Facebook have promised to take measures to address the concerns of fake news masquerading as real news. A team of college students has already developed a browser plug-in called FiB to help readers identify on Facebook what is fake and what is real.
But these steps don’t go far enough to address fake news.
The question then is: Can we better prepare ourselves to challenge and reject the fabrications – the untruthful texts and images – that circulate so easily in the online world?
As scholars of library and information science, we argue that in today’s complex world, traditional literacy, with its emphasis on reading and writing, and information literacy – the ability to search and retrieve information – are not enough.
What we need today is metaliteracy – an ability to make sense of the vast amounts of information in the connected world of social media.
Why digital literacy is not enough
Students today are consumers of the latest technology gadgets and social media platforms. However, they don’t always have a deep understanding of the information transmitted through these devices, or how to be creators of online content.
Researchers at Stanford University recently found that “when it comes to evaluating information that flows through social media channels,” today’s “digital natives,” despite being immersed in these environments, “are easily duped” by misinformation.
They said they “were taken aback by students’ lack of preparation” and argued that educators and policymakers must “demonstrate the link between digital literacy and citizenship.”
The truth is that we live in a world where information lacks traditional editorial filtering mechanisms. It also comes in various styles and forms – it could range from digital images to multimedia to blogs and wikis. The veracity of all this information is not easily assessed.
This problem has been around for a while. In 2005, for example, a false story about a political figure, John Seigenthaler Sr., was posted by an anonymous author on Wikipedia, implicating him in the assassinations of President John F. Kennedy and Bobby Kennedy. Seigenthaler challenged this fake entry and it was eventually corrected. Several other hoaxes have circulated on Wikipedia over the years, showing how easy it is to post false information online.
Indeed, in 2007, FactCheck.org, a website that monitors the accuracy of statements by major U.S. political players, urged readers to ask critical questions in response to a false story circulating about House Democratic leader Nancy Pelosi. At the time, people were being misled into believing that Pelosi was proposing a tax on retirement funds and others to help illegal immigrants and minorities.
As we see it, metaliteracy is a way to achieve these goals.
So, what is metaliteracy?
Digital literacy supports the effective use of digital technologies, while metaliteracy emphasizes how we think about things. Metaliterate individuals learn to reflect on how they process information based on their feelings or beliefs.
To do that, first and foremost, metaliterates learn to question sources of information. For example, metaliterate individuals learn to carefully differentiate among multiple sites, both formal (such as The New York Times or Associated Press) and informal (a blog post or tweet).
They question the validity of information from any of these sources and do not privilege one over the other. Information presented on a formal TV news source, such as CNN or Fox News, for instance, may be just as inaccurate as someone’s blog post. This involves understanding all sources of information.
Second, metaliterates learn to observe their feelings when reading a news item.
We are less inclined to delve further when something affirms our beliefs. On the other hand, we are more inclined to fact check or examine the source of the news when we don’t agree with it. Thinking about our own thinking reminds us that we need to move beyond how we feel, and engage our cognitive faculties in doing a critical assessment.
Metaliteracy challenges assumptions
Metaliteracy helps us understand the context from which the news is arising, noting whether the information emanates from research or editorial commentary, distinguishing the value of formal and informal news sources and evaluating comments left by others.
By reflecting on the way we are thinking about a news story, for instance, we will be more apt to challenge our assumptions, ask good questions about what we are reading and actively seek additional information.
Consider the recent example of how fake news was put out through a single tweet and believed by thousands of readers online. Eric Tucker, a 35-year-old cofounder of a marketing company in Austin, Texas, tweeted that anti-Trump protesters were professionally organized and bused to Trump rallies. Despite having only 40 Twitter followers, this one individual managed to start a conspiracy theory. Thousands of people believed and forwarded the tweet.
This example shows how easy it is to transmit information online to a wide audience, even if it is not accurate. The combination of word and image in this case was powerful and supported what many people already believed to be true. But it also showed a failure to ask critical questions within an online community with shared ideas or to challenge one’s own beliefs with careful reflection.
In other words, just because information is shared widely on social media, that does not mean it is true.
Developing deeper understanding
Another emphasis of metaliteracy is understanding how information is packaged and delivered.
Packaging can be examined on a number of fronts. One is the medium used – is it text, photograph, video, cartoon, illustration or artwork? The other is how it is used – is the medium designed to appeal to our feelings? Does professional-looking design provide a level of credibility to the unsuspecting viewer?
Social media makes it easy to produce and distribute all kinds of digital content. We can all be photographers or digital storytellers using online tools for producing and packaging well-designed materials. This can be empowering.
But the same material can be used to create intentionally false messages with appealing design features. Metaliterates learn to distinguish between formal and informal sources of information that may have very different or nonexistent editorial checks and balances.
They learn to examine the packaging of content. They learn to recognize whether the seemingly professional design may be a façade for a bias or misinformation. Realnewsrightnow, for example, is a slickly designed site with attention-grabbing but often false headlines. The About page of the website might raise questions, but only if a reader’s mindset is evaluative.
Becoming a responsible citizen
Because social media is interactive and collaborative, the metaliterate learner must know how to contribute responsibly as well.
Metaliterate individuals recognize there are ethical considerations involved in sharing information, such as making sure it is accurate. But there is more. Metaliteracy asks that individuals understand, on a mental and emotional level, the potential impact of their participation.
So, metaliterate individuals don’t just post random thoughts that are not based in truth. They learn that in a public space they have a responsibility to be fair and accurate.
So how can we become metaliterate?
Schools need to urge students to ponder these questions. Students need to be made aware of these issues early on, so that they do not develop uncritical assumptions and habits as they use technology.
They need to understand that whether they are posting a tweet, blog, Facebook post or writing a response to others online, they need to think carefully about what they are saying.
While social media offers much promise for providing everyone with a voice, there is a disturbing downside to this revolution. It has enabled sharing of misinformation and false news stories that radically alter representations of reality.
The public gets a lot of its news and information from Facebook. Some of it is fake. That presents a problem for the site’s users, and for the company itself.
Facebook cofounder and chairman Mark Zuckerberg said the company will find ways to address the problem, though he didn’t acknowledge its severity. And without apparent irony, he made this announcement in a Facebook post surrounded – at least for some viewers – by fake news items.
Other technology-first companies with similar power over how the public informs itself, such as Google, have worked hard over the years to demote low-quality information in their search results. But Facebook has not made similar moves to help users.
What could Facebook do to meet its social obligation to sort fact from fiction for the 70 percent of internet users who access Facebook? If the site is increasingly where people are getting their news, what could the company do without taking up the mantle of being a final arbiter of truth? My work as a professor of information studies suggests there are at least three options.
Facebook’s role
Facebook says it is a technology company, not a media company. The company’s primary motive is profit, rather than a loftier goal like producing high-quality information to help the public act knowledgeably in the world.
Nevertheless, posts on the site, and the surrounding conversations both online and off, are increasingly involved with our public discourse and the nation’s political agenda. As a result, the corporation has a social obligation to use its technology to advance the common good.
Discerning truth from falsehood, however, can be daunting. Facebook is not alone in raising concerns about its ability – and that of other tech companies – to judge the quality of news. The director of FactCheck.org, a nonprofit fact-checking group based at the University of Pennsylvania, told Bloomberg News that many claims and stories aren’t entirely false. Many have kernels of truth, even if they are very misleadingly phrased. So what can Facebook really do?
Option 1: Nudging
One option Facebook could adopt involves using existing lists identifying prescreened reliable and fake-news sites. The site could then alert those who want to share a troublesome article that its source is questionable.
One developer, for example, has created an extension to the Chrome browser that indicates when a website you’re looking at might be fake. (He calls it the “B.S. Detector.”) In a 36-hour hackathon, a group of college students created a similar Chrome browser extension that indicates whether the website the article comes from is on a list of verified reliable sites, or is instead unverified.
These extensions present their alerts while people are scrolling through their newsfeeds. At present, neither of these works directly as part of Facebook. Integrating them would provide a more seamless experience, and would make the service available to all Facebook users, beyond just those who installed one of the extensions on their own computer.
The company could also use the information the extensions generate – or their source material – to warn users before they share unreliable information. In the world of software design, this is known as a “nudge.” The warning system monitors user behavior and notifies people or gives them some feedback to help alter their actions when using the software.
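The decision behind such a nudge can be sketched simply: check a shared link's domain against a list of flagged sources before the post goes out. The blocklist domains below are hypothetical placeholders; a real system would draw on the curated lists the browser extensions rely on.

```python
from urllib.parse import urlparse

# Hypothetical blocklist for illustration only; a real nudge would use
# curated, regularly updated lists of unreliable sites.
FLAGGED_DOMAINS = {"totally-real-news.example", "daily-hoax.example"}

def nudge_before_share(url):
    """Return a warning message if the link points at a flagged source."""
    domain = urlparse(url).netloc.lower()
    if domain.startswith("www."):
        domain = domain[4:]
    if domain in FLAGGED_DOMAINS:
        return f"Heads up: {domain} is on a list of questionable sources. Share anyway?"
    return None  # no nudge needed

print(nudge_before_share("http://www.daily-hoax.example/shocking-story"))
print(nudge_before_share("http://apnews.com/article"))
```

Crucially, the function only returns a message; the user still decides whether to share, which is what keeps it a nudge rather than censorship.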
This has been done before, for other purposes. For example, colleagues of mine here at Syracuse University built a nudging application that monitors what Facebook users are writing in a new post. It pops up a notification if the content they are writing is something they might regret, such as an angry message with swear words.
The beauty of nudges is the gentle but effective way they remind people about behavior to help them then change that behavior. Studies that have tested the use of nudges to improve healthy behavior, for example, find that people are more likely to change their diet and exercise based on gentle reminders and recommendations. Nudges can be effective because they give people control while also giving them useful information. Ultimately the recipient of the nudge still decides whether to use the feedback provided. Nudges don’t feel coercive; instead, they’re potentially empowering.
Option 2: Crowdsourcing
Facebook could also use the power of crowdsourcing to help evaluate news sources and indicate when news that is being shared has been evaluated and rated. One important challenge with fake news is that it plays to how our brains are wired. We have mental shortcuts, called cognitive biases, that help us make decisions when we don’t have quite enough information (we never do), or quite enough time (we never do). Generally these shortcuts work well for us as we make decisions on everything from which route to drive to work to what car to buy. But, occasionally, they fail us. Falling for fake news is one of those instances.
This can happen to anyone – even me. In the primary season, I was following a Twitter hashtag on which then-primary candidate Donald Trump tweeted. A message appeared that I found sort of shocking. I retweeted it with a comment mocking its offensiveness. A day later, I realized that the tweet was from a parody account whose handle looked nearly identical to Trump’s, but had one letter changed.
I missed it because I had fallen for confirmation bias – the tendency to overlook some information because it runs counter to my expectations, predictions or hunches. In this case, I had disregarded that little voice that told me this particular tweet was a little too over the top for Trump, because I believed he was capable of producing messages even more inappropriate. Fake news preys on us the same way.
Another problem with fake news is that it can travel much farther than any correction that might come afterwards. This is similar to the challenges that have always faced newsrooms when they have reported erroneous information. Although they publish corrections, often the people originally exposed to the misinformation never see the update, and therefore don’t know what they read earlier is wrong. Moreover, people tend to hold on to the first information they encounter; corrections can even backfire by repeating wrong information and reinforcing the error in readers’ minds.
If people evaluated information as they read it and shared those ratings, the truth scores, like the nudges, could be part of the Facebook application. That could help users decide for themselves whether to read, share or simply ignore. One challenge with crowdsourcing is that people can game these systems to try to drive biased outcomes. But the beauty of crowdsourcing is that the crowd can also rate the raters, just as happens on Reddit or with Amazon’s reviews, reducing the influence of troublemakers.
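One simple way to combine crowd ratings while discounting bad-faith raters is a reputation-weighted average. The sketch below is illustrative only; the users, votes and reputation scores are invented for the example.

```python
def truth_score(ratings, reputation):
    """Reputation-weighted share of 'looks accurate' votes for one story.

    ratings: list of (user, vote) pairs, vote 1 = accurate, 0 = fake
    reputation: user -> weight in [0, 1], earned by past rating quality
    """
    total = weighted = 0.0
    for user, vote in ratings:
        w = reputation.get(user, 0.5)  # unknown raters count at half weight
        total += w
        weighted += w * vote
    return weighted / total if total else 0.5

ratings = [("alice", 1), ("bob", 1), ("carol", 1), ("troll1", 0), ("troll2", 0)]
reputation = {"alice": 0.9, "bob": 0.8, "carol": 0.85, "troll1": 0.1, "troll2": 0.1}
print(round(truth_score(ratings, reputation), 2))
```

Because the two low-reputation "troll" accounts carry little weight, their down-votes barely move the story's score, which is exactly the rate-the-raters effect described above.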
Option 3: Algorithmic social distance
The third way that Facebook could help would be to reduce the algorithmic bias that presently exists in Facebook. The site primarily shows posts from those with whom you have engaged on Facebook. In other words, the Facebook algorithm creates what some have called a filter bubble, an online news phenomenon that has concerned scholars for decades now. If you are exposed only to people with ideas that are like your own, it leads to political polarization: Liberals get even more extreme in their liberalism, and conservatives get more conservative.
The filter bubble creates an “echo chamber,” where similar ideas bounce around endlessly, but new information has a hard time finding its way in. This is a problem when the echo chamber blocks out corrective or fact-checking information.
If Facebook were to open up more news to come into a person’s newsfeed from a random set of people in their social network, it would increase the chances that new information, alternative information and contradictory information would flow within that network. The average number of friends in a Facebook user’s network is 338. Although many of us have friends and family who share our values and beliefs, we also have acquaintances and strangers who are part of our Facebook network who have diametrically opposed views. If Facebook’s algorithms brought more of those views into our networks, the filter bubble would be more porous.
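Algorithmically, making the bubble more porous could be as simple as swapping a fraction of the engagement-ranked feed for posts sampled at random from the wider network. A sketch under assumed parameters (the 20 percent mix ratio is an arbitrary choice for illustration):

```python
import random

def diversified_feed(ranked_posts, network_posts, mix_ratio=0.2, seed=None):
    """Swap a fraction of the engagement-ranked feed for random network posts."""
    rng = random.Random(seed)
    k = int(len(ranked_posts) * mix_ratio)
    keep = ranked_posts[: len(ranked_posts) - k]  # top-ranked posts survive
    injected = rng.sample(network_posts, min(k, len(network_posts)))
    feed = keep + injected
    rng.shuffle(feed)  # interleave the injected posts with the ranked ones
    return feed

ranked = [f"close-friend-post-{i}" for i in range(10)]
wider = [f"acquaintance-post-{i}" for i in range(50)]
feed = diversified_feed(ranked, wider, seed=42)
print(len(feed))
```

The feed stays the same length, but a slice of it now comes from acquaintances outside the user's usual circle, giving contradictory or corrective information a route in.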
All of these options are well within the capabilities of the engineers and researchers at Facebook. They would empower users to make better decisions about the information they choose to read and to share with their social networks. As a leading platform for information dissemination and a generator of social and political culture through talk and information sharing, Facebook need not be the ultimate arbiter of truth. But it can use the power of its social networks to help users gauge the value of items amid the stream of content they face.
Three weeks after Donald Trump won a historic victory to become the 45th president of the United States, the media postmortems continue. In particular, the role played by the media and technology industries is coming under heavy scrutiny in the press, with Facebook’s role in the rise of fake news currently enjoying…
What President-elect Donald Trump and the Republican sweep of government will mean for K-12 education priorities over the next four years is not entirely clear yet. However, policy statements and administration selections so far indicate “school choice” will top the agenda.
Betsy DeVos, Trump’s nominee for education secretary, has been known to be an advocate of school choice initiatives: DeVos has supported voucher programs that allow families to use taxpayer money to enroll in private and religious schools. She also promoted charter school legislation that offers students choices outside of traditional public schools.
Vice President-elect Mike Pence too has a history as governor of Indiana of promoting school choice policy. Indiana not only is ranked as having the most favorable policy provisions for charter schools by a prominent charter schooling advocacy group, but it is among the 25 states employing a type of charter school unfamiliar to many folks across the United States: the cyber charter school.
Unlike the usual charter school, the cyber version typically delivers instruction to students online wherever they may live, so long as they are residents of the state in which the cyber charter school operates. Cyber charter schools have been growing in states with school choice policies.
Our research, along with a body of academic work, suggests that the public should be concerned about an expansion of the cyber charter schooling model.
Here’s why.
What is a cyber charter school?
Charter schools are privately managed K-12 schools that utilize public money. The funds for charter schools are removed from regular public schooling budgets and paid to various private firms and organizations (and sometimes other parts of a state’s education system) to provide a wider choice of schools.
In the cyber version of the charter school, instruction is typically delivered to students online wherever they may live, so long as they are residents of the state in which the cyber charter school operates. The model of these schools can vary – some use a hybrid delivery model (online and in person), although most are entirely online. Students receive course material, lessons and tests on their computer at home (usually the computer is also provided with state funds).
As with traditional charter schools, the general idea behind cyber charter schools is to allow families and students to have a choice other than their local public school.
A 2015 annual report, prepared by a consulting group that tracks online schooling and often cited by scholars to describe cyber charter school enrollment, shows that in 2014-2015 there were 275,000 students in cyber charter schools across 25 states. In some states, tens of thousands of students enroll in cyber charter schools. In Pennsylvania, for example, more than 36,000 students enrolled in cyber charter schools during 2014-2015.
Where do the students come from?
One of the goals of recent scholarship has been to understand who enrolls in these schools and why.
Pupils from the Einstein Academy Charter School rally in the State Capitol rotunda in Harrisburg, Pennsylvania to protest state policies.
AP Photos/ Paul Vathis
The National Education Policy Center (NEPC) conducts an analysis of cyber charter school students every year. The most recent report shows that in 2013-2014, cyber charter schools, compared to the national average, had higher percentages of white students and lower percentages of students eligible for free and reduced-price lunch.
However, since these numbers are nationally aggregated and not every state has a cyber charter school, we believe comparing national cyber charter school averages to all students nationally may be problematic. Our research at Penn State on cyber charter schools has examined enrollments within Pennsylvania and shows that the picture is more complicated.
In our study of enrollments in Pennsylvania, we found that the majority of students in cyber charter schools are indeed white, but they match the racial demographics of the state. Similar results have been seen in Ohio.
Furthermore, in another study in Pennsylvania, we found that economically disadvantaged students were more likely to enroll in a cyber charter school.
An obvious question to ask is whether parents would have homeschooled their children had the cyber charter school option not existed. The best estimate comes from an internal report of one of the largest national providers of cyber charter schools: The report found that only a small share – 13.6 percent – of cyber school students in those schools were previously homeschooled.
So, what motivates a majority of parents to enroll their children in these schools?
Penn State researchers who interviewed parents who enrolled their children into cyber charter schools found that parents thought these schools were better customized to their children’s needs, carried little financial risk and were possibly the last hope for their child to succeed in school.
Concerns about cyber charters
Despite the hope that many parents hold out for this new educational option, the performance of cyber charter schools has consistently, and often drastically, lagged behind the performance of their brick-and-mortar school counterparts.
Research about cyber charter school performance paints a dismal picture of test-based outcomes. For example, a recent report from the Center for Research on Education Outcomes (CREDO), a policy analysis center based at Stanford University, used a technique to match cyber students to an academic and demographic “twin.”
They did this matching twice, once to compare individual gains of cyber charter students to their statistical twin in brick-and-mortar charter schools and once to compare them to their statistical twin in a brick-and-mortar district school.
Across all racial and poverty status groups in the study, the majority of cyber charter school students showed weaker learning growth than their matched twins. This was true in both math and reading, whether students were compared to charter or traditional students.
Researchers found these trends in almost every state they studied: lower learning growth in reading in 14 of 17 states, and in math in 17 of 17. In their report they noted that improved academic outcomes for a student in a cyber charter school were “the exception rather than the rule.”
This research is consistent with other studies of the academic outcomes of cyber charter schools. Studies in Pennsylvania and Ohio similarly find much lower learning growth in cyber charter schools in those states when compared to other schools.
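The matched-“twin” approach described above can be illustrated with a minimal sketch. This is not CREDO’s actual methodology (which involves far more matching variables and statistical controls) – just a hypothetical illustration, with invented field names, of the core idea: pair each cyber student with a demographically identical comparison student who has the closest prior test score, then compare their subsequent score growth.

```python
# Illustrative sketch (hypothetical data and fields) of a matched-"twin"
# comparison. Each student is a dict with demographics plus prior and
# post test scores.

def find_twin(student, pool):
    """Return the pool student with matching demographics and the closest
    prior test score, or None if no demographic match exists."""
    candidates = [p for p in pool
                  if p["race"] == student["race"]
                  and p["poverty"] == student["poverty"]]
    if not candidates:
        return None
    return min(candidates, key=lambda p: abs(p["prior"] - student["prior"]))

def mean_growth_gap(cyber_students, comparison_pool):
    """Average of (cyber student's growth - twin's growth).
    A negative value means cyber students gained less than their twins."""
    gaps = []
    for s in cyber_students:
        twin = find_twin(s, comparison_pool)
        if twin is not None:
            gaps.append((s["post"] - s["prior"]) - (twin["post"] - twin["prior"]))
    return sum(gaps) / len(gaps) if gaps else None
```

The finding reported above corresponds to `mean_growth_gap` coming out negative for most student groups when cyber students are compared against either charter or district-school pools.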
Of further concern, as legal scholar Susan DeJarnatt has shown, cyber charter schools may not have all the safeguards needed to protect the sector from fraud. Federal authorities have already indicted two of the five “mega-cyber” providers (schools that enroll more than 2,000 students) in Pennsylvania for fraud.
Outside of the scholarship conducted about fraud in Pennsylvania, a review of hundreds of news stories revealed dozens of state audits across 20-plus states. These news stories repeatedly and overwhelmingly raise concerns about funding and academic accountability across all state contexts, matching the concerns that have emerged in the academic literature.
Looking forward
Following such reports of poor academic outcomes and questionable ethical practices, our research team at Penn State has decided to continue to study the cyber charter school movement in Pennsylvania to find out more.
Our current research examines how cyber charter schools have influenced the entire education system in Pennsylvania.
However, based on the body of academic work currently available, we believe that while it may be sensible to allow online learning in certain circumstances, the cyber charter model is not the appropriate one. And the new education secretary, Betsy DeVos, might want to exercise caution.
Even if you never use Twitter, there will be no escaping Donald Trump on your mobile phone.
CNET notes that once Trump becomes president in January, he will gain the power to send out nationwide, unblockable alerts to every mobile number in the United States.
In other words, if President Trump wants the entire country to know that Rosie O'Donnell is "a real loser" or that "dishonest CNN" is even worse than the "failing New York Times," he'll be able to tell everyone in the country about it -- and there will be no way to block him out.
Thankfully, these alerts are typically only used as emergency notifications, so it's unlikely that Trump will use them to broadcast his feuds with assorted celebrities. Nonetheless, given how unconventional his campaigning and governing style have proven to be so far, it can't be completely ruled out.
Germany's spy chief warned that Russian hackers may target next year's German election with campaigns of misinformation that could undermine the democratic process, echoing concerns voiced by the country's domestic intelligence director.
In the run-up to the Nov. 8 presidential election, won by populist outsider Donald Trump, U.S. intelligence officials warned of efforts to manipulate the vote that they believed were backed by Russian authorities. Russian officials denied any such effort.
In an interview published on Tuesday in the Sueddeutsche Zeitung, Bruno Kahl, the new head of Germany's BND foreign intelligence service, said there were indications that Russia may be behind the interference.
"We have evidence that cyber attacks are taking place that have no other purpose than triggering political uncertainty," he said. "The perpetrators are interested in delegitimising the democratic process as such, no matter who that subsequently helps."
The head of Germany's domestic BfV intelligence agency told Reuters earlier in November that authorities were concerned that Russia may seek to interfere in Germany's national elections through the use of misleading news stories.
Chancellor Angela Merkel has also warned that social bots - software programs that sway opinion on influential social media sites by spreading fake news - might manipulate the vote.
She faces a growing challenge from the anti-immigrant, populist AfD party, which has said the European Union should drop sanctions imposed on Russia and that Berlin should take a more balanced position towards Moscow.
Some critics say a proliferation of fake news helped sway the U.S. election in the favor of the Republican Trump, who has pledged to improve relations with Russian President Vladimir Putin. Defeated Democratic candidate Hillary Clinton accused Trump of being a Putin "puppet".
Kahl said Germany, among other European countries, was a particular target of misinformation campaigns.
"A kind of pressure is being exercised on public discourse and democracy here which is unacceptable," he said.
While intelligence agencies used to focus on countries, today the challenges and the threats are more varied and the actors more diverse, Kahl added.
Deutsche Telekom has blamed disruptions experienced by hundreds of thousands of its customers on Monday on a failed hacking attempt to hijack consumer router devices for the purpose of a wider Internet attack.
(Reporting by Caroline Copley; editing by Mark Heinrich)
Retired Navy officer Jim Wright of the liberal blog Stonekettle Station said this week that he was banned by Facebook for speaking out against supporters of the Nazi Party.
In a blog post on Wednesday, Wright explained that Facebook had notified him that his account had been suspended for "violation of community standards."
"The community standard I violated is apparently the one where you’re not allowed to criticize actual, no fooling, Nazis," he wrote. "That’s right, I was banned for criticizing an actual Nazi."
According to Wright, Facebook banned him for a post in which he spoke out against several Twitter users who were defending the history of the Nazi Party.
"I've got hundreds of angry messages here telling me to stop calling Trump supporters fascists," Wright said in the post that got him banned. "And I would, except for the part where I keep running into ACTUAL FUCKING NAZIS."
Wright continued:
So again, you don't want to be called a Nazi?
Then stop hanging out with actual Nazis. Just stop it. Stop it. Stop it.
Stop hanging out with Nazis. Don't be polite to Nazis. Don't think that the First Amendment means you have to be respectful of Nazis. Don't pretend Nazis have a valid point of view. They're Nazis.
Stop standing next to Nazis.
Stop acting like Nazis.
Stop cheering Nazis.
Stop voting for the people Nazis vote for.
They're fucking NAZIS. You don't have to be polite to them. It's okay to hate them. They're fucking NAZIS.
Wright speculated that his account was locked by Facebook for 24 hours after people who opposed his post flagged it as "spam."
"The people who do this sort of thing, do so specifically in order to silence people they don’t like, not because they are actually offended," he noted. "My ban from the platform is the result of Facebook’s lousy architecture, which lets bullies and harassers abuse Facebook’s automated system – a system that was supposedly put in place to make Facebook safer – and I have absolutely no recourse to protest or appeal."
Wright told Raw Story on Wednesday afternoon that his Facebook account was still locked.
"I know they are aware of the situation," he said. "But I've received no response from Facebook either formally or via informal channels."
Wright has promised that he will not back down if and when Facebook reinstates his account -- even if it means he is banned again.
"Those who know me, know that I am a veteran who fought under the flag of the United States of America for more than 20 years, can probably guess which way I’ll go," he wrote. "Given America’s new acceptance of fascism, I suspect platforms like Facebook and Twitter will either have to become more accommodating of actual fascist ideology and less tolerant of people like me, or risk going to the wall themselves – especially given that our new president has made it very clear that he intends to directly control how the media, including social media, reports on his administration."
A cybersecurity expert is calling for a recount of the presidential election in three key states because he knows how easy it would be to hack the vote -- and because it's happened before.
J. Alex Halderman, a computer science professor at the University of Michigan, urged Hillary Clinton to call for a forensic audit of the voting machines in Wisconsin, Michigan and Pennsylvania -- three states surprisingly won by Donald Trump.
Hackers linked by U.S. officials to the Russian government attacked the Democratic National Committee, Clinton's campaign chairman, and voter registration systems in Arizona and Illinois, and there's evidence hackers tried to break into election offices in several other states.
Halderman said it would be quite easy for hackers to identify states where polling data suggested a close result, and then install malware in voting machines that would shift a small percentage of the ballots toward a certain candidate.
That malware could be designed to remain inactive -- and thus undetected -- during testing prior to the election, and then erase itself after the polls close, he said.
Halderman cautioned that he does not believe a cyberattack caused Trump to perform better in those three states than polls predicted, but he wants Clinton to call for a recount just to be sure.
U.S. voting machines are almost laughably easy to hack into, and pro-Russian hackers who call themselves CyberBerkut maliciously committed acts of "wanton destruction" in 2014 as they attempted to rig Ukraine's national election, according to a Christian Science Monitor report.
Four days before the May 25, 2014, election, the hackers broke into Ukraine's central election computers, where they deleted files, rendered the vote-counting system inoperable and destroyed the network infrastructure, and they proved what they'd done by dumping emails and other documents online.
Government officials were able to repair the damage by the following day using backups, but cyber experts discovered and removed a virus that had been installed on central election computers just 40 minutes before the results were scheduled to be announced live on television.
The malicious software, if it had not been removed in time, would have showed ultra-nationalist Right Sector party leader Dmytro Yarosh as the winner with 37 percent of the vote and Petro Poroshenko, who actually won, with 29 percent.
Russian Channel One aired a bulletin that same night declaring Yarosh had won with -- you guessed it -- 37 percent of the vote.
In actuality, Yarosh garnered just 1 percent of the vote.
But the hackers weren't done.
Early the following morning, after polls had closed and results were coming in from election districts around the country, the internet links used to submit those data were hit by a distributed denial of service, or DDoS, attack.
That blocked election results for about two hours, and an American cybersecurity company linked that DDoS attack to CyberBerkut.
International observers ultimately declared the Ukraine vote had been genuine, but Halderman said the attacks showed how vulnerable electronic voting machines are to tampering.
"I know I may sound like a Luddite for saying so, but most election security experts are with me on this: paper ballots are the best available technology for casting votes," Halderman said.
That's also why a vote audit is necessary, he said.
"The only way to know whether a cyberattack changed the result is to closely examine the available physical evidence — paper ballots and voting equipment in critical states like Wisconsin, Michigan, and Pennsylvania," Halderman said. "Unfortunately, nobody is ever going to examine that evidence unless candidates in those states act now, in the next several days, to petition for recounts."
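The check Halderman describes – comparing the physical paper record against machine-reported results – can be sketched in miniature. This is a simplified, hypothetical illustration (invented data structure and tolerance threshold, not an actual risk-limiting audit procedure): hand-count the paper ballots in a random sample of precincts and flag any precinct whose machine tally disagrees beyond a small tolerance.

```python
import random

def audit_sample(precincts, sample_size, tolerance=0.01, seed=0):
    """Hand-count check on a random sample of precincts.

    Each precinct is a dict with a 'name', a 'machine' tally and a
    'paper' (hand-count) tally, both mapping candidate -> votes.
    Returns names of sampled precincts whose discrepancy rate
    (total absolute vote difference / total paper ballots) exceeds
    the tolerance, signaling the need for a fuller recount."""
    rng = random.Random(seed)  # fixed seed for a reproducible sample
    flagged = []
    for p in rng.sample(precincts, sample_size):
        total = sum(p["paper"].values())
        diff = sum(abs(p["machine"][c] - p["paper"][c]) for c in p["paper"])
        if total and diff / total > tolerance:
            flagged.append(p["name"])
    return flagged
```

In a real audit the sample size and tolerance would be chosen statistically so that a vote-shifting attack of outcome-changing size would be detected with high probability; the sketch only shows the basic machine-vs-paper comparison.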
Facebook chief executive Mark Zuckerberg called on world leaders Saturday to forge a more "connected" planet, something he said was under threat after Donald Trump's US election win and Britain's "Brexit" vote.
Zuckerberg said in a keynote speech at an Asia-Pacific leaders' summit that while globalization and interconnectedness have their problems, the world must fight the urge to "disconnect."
"As we are learning this year in election after election, even if globalization might (boost) prosperity, it also creates inequality. It helps some people and it hurts others," he said.
The 32-year-old billionaire said there was a "fundamental choice" to make in reacting to that inequality.
"We can disconnect, risk less prosperity and hope jobs that are lost come back. Or we can connect more, try to do more great things, try to work on even greater prosperity and then work to aggressively share that prosperity with everyone."
The second option is better, but also harder, he said in his speech at the Asia-Pacific Economic Cooperation (APEC) summit in Lima, Peru.
"Disconnecting is relatively easy. But connecting requires making big investments in infrastructure and generating the political will to make hard long-term decisions," he said.
Facebook has made headlines with its projects on connectivity and internet access.
The social network has developed solar-powered drones and a satellite to beam internet service to remote areas.
The company has helped more than 40 million people get online, Zuckerberg said.
His comments Saturday came amid deep global uncertainty in the wake of the unexpected poll results in the US and Britain.
Trump and the Brexit camp both appealed to working-class voters who feel threatened by globalization and immigration, running on a populist politics of disillusionment with an increasingly borderless world.
Trump vows to protect American jobs from cheaper labor overseas, while Brexit campaigners promise British workers will fare better outside the European Union than in it.
Facebook, the world's largest social network with 1.79 billion users, has been criticized by some as helping Trump to victory by giving a platform to fake election news and extreme-right blogs with untruthful attacks on Trump's opponent Hillary Clinton.
Zuckerberg has dismissed claims his company influenced the vote as "pretty crazy."