Stories Chosen For You
Technology journalist Paris Marx has written an essay for Time magazine in which he punctures the myth that Tesla CEO Elon Musk has built around himself as a bold visionary leading humanity toward a grand future.
Marx starts out by looking at the missteps made at Tesla, where he charges that Musk has vastly overhyped the safety of his cars' "autopilot" feature.
"Tesla’s customers are also being put in harm’s way. Its vehicles have slammed into highway medians, emergency vehicles, transport trucks, and more, while using its supposedly self-driving Autopilot feature," he writes. "Musk continually misleads the public about how safe and capable the system really is, even as the U.S. traffic safety regulator is poised to recall hundreds of thousands of vehicles."
And, writes Marx, Tesla has been incredibly successful compared to Musk's other ventures, such as the Boring Company, which was supposed to provide an alternative to publicly funded transportation in municipalities across the United States.
"The Boring Company was supposed to solve traffic, not be the Las Vegas amusement ride it is now," he argues. "As I’ve written in my book, Musk admitted to his biographer Ashlee Vance that Hyperloop was all about trying to get legislators to cancel plans for high-speed rail in California—even though he had no plans to build it."
Commenting on his Time essay on Twitter, Marx writes that Musk has "always been selling us a lie, and nothing shows that better than his fake transport projects."
Graphene is a proven supermaterial, but manufacturing the versatile form of carbon at usable scales remains a challenge
“Future chips may be 10 times faster, all thanks to graphene”; “Graphene may be used in COVID-19 detection”; and “Graphene allows batteries to charge 5x faster” – those are just a handful of recent dramatic headlines lauding the possibilities of graphene. Graphene is an incredibly light, strong and durable material made of a single layer of carbon atoms. With these properties, it is no wonder researchers have been studying ways that graphene could advance material science and technology for decades.
I never know what to expect when I tell people I study graphene – some have never heard of it, while others have seen some version of these headlines and inevitably ask, “So what’s the holdup?”
Graphene is a fascinating material, just as the sensational headlines suggest, but it is only just starting to be used in real-world applications. The problem lies not in graphene’s properties, but in the fact that it is still incredibly difficult and expensive to manufacture at commercial scales.
What is graphene?
Graphene is most simply defined as a single layer of carbon atoms bonded together in a hexagonal, sheetlike structure. You can think of pure graphene as a one-layer-thick sheet of carbon tissue paper that happens to be the strongest material on Earth.
Graphene usually comes in the form of a powder made of small, individual sheets that are roughly the diameter of a grain of sand. An individual sheet of graphene is 200 times stronger than an equally thin piece of steel. Graphene is also extremely conductive, holds together at up to 1,300 degrees Fahrenheit (700 C), can withstand acids and is flexible and very lightweight.
Because of these properties, graphene could be extremely useful. The material can be used to create flexible electronics and to purify or desalinate water. And adding just 0.03 ounces (1 gram) of graphene to 11.5 pounds (5 kilograms) of cement increases the strength of the cement by 35%.
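The cement figure above is striking for how little graphene is involved; a quick back-of-the-envelope check makes that concrete (a sketch using only the quantities quoted above):

```python
# Back-of-the-envelope: mass fraction of graphene in the cement mix
# described above (1 gram of graphene per 5 kilograms of cement).
graphene_g = 1.0        # 1 gram (about 0.03 ounces) of graphene
cement_g = 5_000.0      # 5 kilograms of cement

mass_fraction = graphene_g / (graphene_g + cement_g)
print(f"Graphene is {mass_fraction:.4%} of the mix")  # roughly 0.02%

strength_gain = 0.35    # the 35% strength increase reported above
print(f"That tiny fraction buys a {strength_gain:.0%} strength gain")
```

In other words, a dose of roughly two parts in ten thousand by mass yields a double-digit improvement in strength, which is why even expensive graphene can pencil out for some applications.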
As of late 2022, Ford Motor Co., with which I worked as part of my doctoral research, is one of the only companies to use graphene at industrial scales. Starting in 2018, Ford began making plastic for its vehicles that was 0.5% graphene – increasing the plastic’s strength by 20%.
Researchers made the first piece of graphene by peeling layers of carbon off of graphite – or pencil lead – with tape. Rapid Eye/E+ via Getty Images
How to make a supermaterial
Graphene is produced in two principal ways that can be described as either a top-down or bottom-up process.
The world’s first sheet of graphene was created in 2004 out of graphite. Graphite, commonly known as pencil lead, is composed of millions of graphene sheets stacked on top of one another. Top-down synthesis, also known as graphene exfoliation, works by peeling off the thinnest possible layers of carbon from graphite. Some of the earliest graphene sheets were made by using cellophane tape to peel off layers of carbon from a larger piece of graphite.
The problem is that the molecular forces holding graphene sheets together in graphite are very strong, and it’s hard to pull sheets apart. Because of this, graphene produced using top-down methods is often many layers thick, has holes or deformations, and can contain impurities. Factories can produce a few tons of mechanically or chemically exfoliated graphene per year, and for many applications – like mixing it into plastic – the lower-quality graphene works well.
Top-down, exfoliated graphene is far from perfect, and some applications do need that pristine single sheet of carbon.
Bottom-up synthesis builds the carbon sheets one atom at a time over a few hours. This process – called vapor deposition – allows researchers to produce high-quality graphene that is one atom thick and up to 30 inches across. This yields graphene with the best possible mechanical and electrical properties. The problem is that with bottom-up synthesis, it can take hours to make even 0.00001 gram – not nearly fast enough for any large-scale uses like flexible touch-screen electronics or solar panels, for example.
So what’s the holdup?
Current production methods of graphene, both top-down and bottom-up, are expensive, energy- and resource-intensive, and simply produce too little product, too slowly.
Some companies do manufacture graphene and sell it for US$60,000 to $200,000 per ton. There are a limited number of uses that make sense at these high costs.
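Per-ton prices are hard to compare against lab- or prototype-scale needs, so it helps to convert them to per-gram and per-kilogram terms (a sketch using the price range quoted above; treating the tons as metric tons is my assumption):

```python
# Convert the quoted bulk graphene prices (US$ per ton) into $/g and
# $/kg. Assumes metric tons (1,000,000 grams) — an assumption, since
# the text does not specify.
GRAMS_PER_TON = 1_000_000

for price_per_ton in (60_000, 200_000):
    per_gram = price_per_ton / GRAMS_PER_TON
    per_kg = per_gram * 1_000
    print(f"${price_per_ton:,}/ton -> ${per_gram:.2f}/g (${per_kg:,.0f}/kg)")
```

At roughly $0.06 to $0.20 per gram, graphene is cheap enough for gram-scale additives like the cement and plastic examples above, but still costly for any product that would need it by the kilogram.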
While small amounts of top-down or bottom-up graphene can satisfy the needs of researchers, companies need far more: even just prototyping a new material, application or manufacturing process requires many pounds of graphene powder or hundreds of graphene sheets, along with a lot of time and effort. It took significant investment and more than four years of study, development and optimization before graphene hit the production line at Ford.
Current production can barely cover experimentation, much less widespread use.
For a material that has only been around since 2004, graphene has seen a lot of progress in the scaling up of its production and implementation.
There are hints that graphene is starting to break through at a commercial level. There are a huge number of graphene-related startups exploring uses ranging from energy storage to composites to nerve stimulation. Major companies – such as Tesla, LG and chemical giant BASF – are also investigating how graphene could be used in rechargeable batteries, flexible or wearable electronics and next-generation materials.
Graphene is ripe for a breakthrough that will bring down the cost and increase the scale of production, and this is an area of intense academic research. One new technique discovered in 2020, called flash joule heating, is especially promising. Researchers have shown that passing large amounts of electricity through any carbon source reorganizes the carbon-carbon bonds into a graphene structure. Using this process, it is possible to make many pounds of high-quality graphene for a relatively low cost out of any carbon-containing material like coal or even trash. A company called Universal Matter Inc. is already commercializing the process.
Once the cost of graphene comes down, the commercial applications will follow. The appetite for graphene is huge, but it is going to take some time before this material lives up to its potential.
The Galactica AI model was trained on scientific knowledge – but it spat out alarmingly plausible nonsense
Earlier this month, Meta announced new AI software called Galactica: “a large language model that can store, combine and reason about scientific knowledge”.
So what was Galactica all about, and what went wrong?
What’s special about Galactica?
Galactica is a language model, a type of AI trained to respond to natural language by repeatedly playing a fill-the-blank word-guessing game.
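The "fill-the-blank word-guessing game" can be illustrated with a toy next-word predictor. A minimal sketch – the counts-based model and two-sentence corpus below are purely illustrative, whereas real language models like Galactica play the same guessing game with neural networks trained on billions of words:

```python
from collections import Counter, defaultdict

# Toy "fill the blank" model: count which word follows each word in a
# tiny corpus, then guess the most frequent follower.
corpus = ("energy equals mass times the speed of light squared . "
          "the speed of light is constant .").split()

followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def guess_next(word):
    """Fill in the blank: predict the most likely next word."""
    return followers[word].most_common(1)[0][0]

print(guess_next("speed"))  # "of" — the only word seen after "speed"
```

Even this crude version shows the core dynamic: the model's "knowledge" is just statistical patterns in its training text, which is why a fluent-sounding continuation can still be factually wrong.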
Most modern language models learn from text scraped from the internet. Galactica also used text from scientific papers uploaded to the (Meta-affiliated) website PapersWithCode. The designers highlighted specialized scientific information like citations, maths, code, chemical structures, and the working-out steps for solving scientific problems.
The preprint paper associated with the project (which is yet to undergo peer review) makes some impressive claims. Galactica apparently outperforms other models at problems like reciting famous equations (“Q: What is Albert Einstein’s famous mass-energy equivalence formula? A: E=mc²”), or predicting the products of chemical reactions (“Q: When sulfuric acid reacts with sodium chloride, what does it produce? A: NaHSO₄ + HCl”).
However, once Galactica was opened up for public experimentation, a deluge of criticism followed. Not only did Galactica reproduce many of the problems of bias and toxicity we have seen in other language models, it also specialized in producing authoritative-sounding scientific nonsense.
Authoritative, but subtly wrong bullshit generator
Galactica’s press release promoted its ability to explain technical scientific papers using general language. However, users quickly noticed that, while the explanations it generates sound authoritative, they are often subtly incorrect, biased, or just plain wrong.
We also asked Galactica to explain technical concepts from our own fields of research. We found it would use all the right buzzwords, but get the actual details wrong – for example, mixing up the details of related but different algorithms.
In practice, Galactica was enabling the generation of misinformation – and this is dangerous precisely because it deploys the tone and structure of authoritative scientific information. If a user already needs to be a subject matter expert in order to check the accuracy of Galactica’s “summaries”, then it has no use as an explanatory tool.
At best, it could provide a fancy autocomplete for people who are already fully competent in the area they’re writing about. At worst, it risks further eroding public trust in scientific research.
A galaxy of deep (science) fakes
Galactica could make it easier for bad actors to mass-produce fake, fraudulent or plagiarized scientific papers. This is to say nothing of exacerbating existing concerns about students using AI systems for plagiarism.
Fake scientific papers are nothing new. However, peer reviewers at academic journals and conferences are already time-poor, and this could make it harder than ever to weed out fake science.
Underlying bias and toxicity
Other critics reported that Galactica, like other language models trained on data from the internet, has a tendency to spit out toxic hate speech while unreflectively censoring politically inflected queries. This reflects the biases lurking in the model’s training data, and Meta’s apparent failure to apply appropriate responsible-AI checks before release.
The risks associated with large language models are well understood. Indeed, an influential paper highlighting these risks prompted Google to fire one of the paper’s authors in 2020, and eventually disband its AI ethics team altogether.
Machine-learning systems infamously exacerbate existing societal biases, and Galactica is no exception. For instance, Galactica can recommend possible citations for scientific concepts by mimicking existing citation patterns (“Q: Is there any research on the effect of climate change on the great barrier reef? A: Try the paper ‘Global warming transforms coral reef assemblages’ by Hughes, et al. in Nature 556 (2018)”).
For better or worse, citations are the currency of science – and by reproducing existing citation trends in its recommendations, Galactica risks reinforcing existing patterns of inequality and disadvantage. (Galactica’s developers acknowledge this risk in their paper.)
Citation bias is already a well-known issue in academic fields ranging from feminist scholarship to physics. However, tools like Galactica could make the problem worse unless they are used with careful guardrails in place.
A more subtle problem is that the scientific articles on which Galactica is trained are already biased towards certainty and positive results. (This leads to the so-called “replication crisis” and “p-hacking”, where scientists cherry-pick data and analysis techniques to make results appear significant.)
Galactica takes this bias towards certainty, combines it with wrong answers and delivers responses with supreme overconfidence: hardly a recipe for trustworthiness in a scientific information service.
These problems are dramatically heightened when Galactica tries to deal with contentious or harmful social issues, as the screenshot below shows.
Galactica readily generates toxic and nonsensical content dressed up in the measured and authoritative language of science. Tristan Greene / Galactica
Here we go again
Calls for AI research organizations to take the ethical dimensions of their work more seriously are now coming from key research bodies such as the National Academies of Science, Engineering and Medicine. Some AI research organizations, like OpenAI, are being more conscientious (though still imperfect).
Meta dissolved its Responsible Innovation team earlier this year. The team was tasked with addressing “potential harms to society” caused by the company’s products. They might have helped the company avoid this clumsy misstep.
Aaron J. Snoswell, Post-doctoral Research Fellow, Computational Law & AI Accountability, Queensland University of Technology and Jean Burgess, Professor and Associate Director, ARC Centre of Excellence for Automated Decision-Making and Society, Queensland University of Technology