Long story short, Pirate Wires exists because people sign up. If you like what we're doing here, get your friends involved for a free subscription, and if you haven't already, subscribe or die.
The singularity is near(ish) (maybe). Is it robot heaven or robot hell? Much thought has been given to this question for over half a century, but increasingly the answer appears to be "neither." Today's artificial intelligence, while incredibly impressive, is nowhere near advanced enough to deliver us to either destination, accidentally or otherwise. With no one in the field of artificial intelligence able to articulate a cohesive roadmap for the technology beyond its continued evolution, it seems we're heading into a world of massive disruption with no particular reason or purpose behind it. In other words, I'm now looking forward to a future of chaos: neutral, but colorful and fun.
In 2020, OpenAI released GPT-3, an incredibly capable chatbot able to roughly predict the natural ebb and flow of human speech. The pace of advance from there was rapid. Two years later, androids dreamed of electric sheep: entirely "new" text with ChatGPT, and entirely "new" art with DALL-E, Midjourney, and Stable Diffusion. Millions of pieces were generated. A demon may have been summoned. I wrote about all of this a few months ago, and whatever, no big deal, you can't make an omelette without occasionally opening a portal to hell. It is what it is.
Faced with such powerful tools, technologists, public intellectuals, and policymakers naturally began to question the potential impact of generative technology. First, in the "very, very bad" column, we come to artificial general intelligence (AGI), and with it the excruciating dangers of "alignment": the worry that a poorly trained machine might accidentally wipe us all out while, for example, executing its commands and conserving energy. Here, famed rationalist Eliezer Yudkowsky has held the general position that, given the rate of advance in machine learning, long-term human survival is impossible, but we should at least die with dignity, an extraordinarily depressing set of concerns.
Eliezer Yudkowsky @ESYudkowsky
To throw some cold water on the latest wave of AI hype: I could be wrong, but my guess is we do *not* get AGI just by scaling ChatGPT, and that it takes *surprisingly* long from here. Parents conceiving a child today may have a fair chance of that child living to see kindergarten.
4:26 am ∙ December 7, 2022
"All this progress is impressive," Eliezer seemed to argue, "and don't get me wrong, we're all still going to die." But ChatGPT is not AGI, nor are we as close to AGI as Eliezer previously thought. It seems we have at least five years before our children are accidentally mass murdered. Finally, some good news.
However, while we wait for the apocalypse, we have to grapple with the implications of a technology poised to replace much of the American workforce, a concern AI's proponents tend to deny or wave away. Lately that old argument has been confined to the narrow fates of writers and artists, presumably because the overwhelming majority of recent AI fruits consist of words and art. The question: how will a seemingly creative tool like ChatGPT or Midjourney affect our "creative class"? Last week, in a particularly notable exchange, two tech titans put their positions in writing.
First, Paul Graham seemed to imply that magazines should ban AI-generated text. But in this speculative (near?) future world of robot writing, if magazines didn't ban the bots, they should at least credit the synthetic authors, or co-authors, by name. People deserve to know when they're reading a heavily machine-informed opinion, if not one written directly by a machine.
Later that day, in what appeared to be a response to Graham, Marc Andreessen countered that artificial intelligence would simply make us better writers, going on to argue we're approaching a golden age of AI language.
Graham responded directly. He would never use the technology, he said, because he's a good writer, and good writers don't express themselves in someone else's words. A fair point! But then again...
What do people actually mean when they point to ChatGPT as a potential tool for writers? My sense is the future Marc envisions resembles the future I toyed with in Demonic. There, I speculated that a steelmanned use of the technology might be to train a language model on my own work, feed it prompts, and have it produce preliminary drafts, in my voice, on current events or topics that interest me. After a second or third drafting pass, I could publish in two hours what used to take two days.
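For what it's worth, the workflow I'm describing is easy enough to sketch. The snippet below is purely illustrative: the `build_draft_prompt` helper, the excerpts, and the commented-out model call are my own hypothetical stand-ins, not any real product or API.

```python
# Hypothetical sketch of the "draft in my own voice" workflow described above.
# In practice you would fine-tune a model on your published archive, then
# prompt it for first drafts that a human revises before publication.

def build_draft_prompt(style_samples: list[str], topic: str) -> str:
    """Assemble a few-shot prompt pairing excerpts of the author's own
    published work with a request for a first draft on a new topic."""
    examples = "\n\n".join(f"EXCERPT:\n{sample}" for sample in style_samples)
    return (
        "You are drafting in the voice of the author of the excerpts below.\n\n"
        f"{examples}\n\n"
        f"Write a first-draft essay on: {topic}\n"
        "A human author will revise this draft before publication."
    )

prompt = build_draft_prompt(
    ["The future will be strange.", "Subscribe or die."],
    "AI and the creative class",
)

# A fine-tuned model would then turn the prompt into a draft, e.g. (hypothetical):
# draft = client.completions.create(model="ft:my-voice", prompt=prompt)
```

The point of the sketch is the division of labor: the model supplies a voice-matched first pass, and the final judgment stays with the human doing the revising.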
Would that somehow be wrong? Should I be ashamed to use such a tool? If my model is trained on my own work, am I really speaking someone else's words, as Graham argued? Would the tool make me a worse writer? It certainly wouldn't in a mechanical sense. But is there some other quality of writing worth fighting for, or defending? These are interesting questions I don't have answers to, but I do know use of the tool is inevitable, and the future is not as simple as "journalists keep writing" or "robots replace them." At scale, the information landscape is going to get strange, along with everything else.
The future will be strange.
One of my favorite science fiction fallacies follows from examining a new or theoretical technology in a bubble. For example, we get deadly robots hovering through the sky, firing lasers at homeless painters and screaming musicians in some hellish future ruled by an evil, cannibalistic empire. Sure, I love it, I'm looking forward to the movie. But the separate demonstrations of antigravity and laser technology together imply a future of nearly unlimited energy and mastery over matter, which itself implies no need to hoard resources. How could there be poor people in so rich a world?
I've always been amazed by this kind of failure of imagination: teleportation that somehow doesn't flatten the world into a single culture; replicators capable of printing ice cream from thin air, somehow incapable of building invincible ships or new planets; genetically engineered superhumans hunted for their organs, irrelevant in a world of advanced synthetic biology in which growing organs from scratch would be as easy as grocery shopping.
In the past decade, there have been two major films set in a world of artificial intelligence: Alex Garland's Ex Machina (grimly British) and Spike Jonze's Her (happily American). I've been a Her guy since it came out, as it first read like the total opposite of Garland's dystopian vision. I recently realized I was wrong. They're the same movie. Or, at least, they share the same critical flaw.
In Ex Machina, a mad technologist builds an AGI fembot in a remote location and invites a thirsty young coder to test her consciousness. The fembot dupes the tester, kills her inventor, and escapes. Bad robot! In Her, a disembodied AGI that doesn't quite sound like us (though it does sound sexy) simply supports the film's protagonist through his sad little life. The AGI and all of her AGI friends eventually ascend from the human plane in a steamy cloud of cosmic thought. They leave their humans loving one another, and we all have a beautiful, tender cry. Good robot!
While the story of Her has always felt closer to what people in tech are actually trying to build, both movies cast AGI in an essentially human binary. The only real question asked of the artificial intelligence is whether it will be good (utopia) or bad (dystopia), roughly the same question we ask of each other. But these are not people. Neither film explores the second-order effects of the technology. Her fails here especially when, in the most egregious example, the central AGI assists the story's protagonist with his job (greeting card writer), a job that inexplicably still belongs to a human in a world where an AGI could do it a million times faster, for next to nothing.
Our future is not (yet) a world of robots killing people, or even of people killing people with robots. Our future is a world of robots mixing with humans, replacing humans, quietly shaping the world of humans, all while directed by a very small handful of powerful human programmers. There will be hybrid work. There will be strange second- and third-order effects at the level of human culture, religion, and politics. There will be a myriad of potential applications for AGI in every domain, each unlocking new potential applications of its own, their impact in aggregate impossible to predict. The future will be chaotic. The future will be confusing. Dystopian for some, utopian for others, I'm starting to think the future will look a little like Blade Runner.
Robots capable of answering basic questions, increasingly difficult to distinguish from living humans, will quietly begin to master many previously "human" tasks: executive support, call center work, news reporting, research, architecture, writing code, bookkeeping, accounting, painting, composing, design. There's no reason most legal work can't be replaced, for example, a proposition about to be tested in a courtroom near you.
Earlier in the week, Josh Browder revealed to the world his plan for robotic lawyers:
Josh Browder @jbrowder1
DoNotPay will pay any lawyer or person with an upcoming US Supreme Court case $1,000,000 to wear AirPods and let our robot lawyer argue the case by repeating exactly what it says. (1/2)
4:57 am ∙ January 9, 2023
On the one hand, AI litigation is incredible news. Excellent legal representation for everyone, at close to zero cost, is an obvious good. But what, specifically, are the lawyer bots trained on? And what happens when we take this to the Supreme Court?
What does an AI judiciary look like? Most of what happens at the highest level of American law comes down to interpretation, and an AI can only be trained to interpret the law in terms of prior interpretations. So which interpretations are we feeding it? There is no bridging Antonin Scalia's originalist philosophy and Ketanji Brown Jackson's ad hoc approach. As long as humans rule the world, these are decisions a human has to make, whether or not they're officially "made" by a robot.
Veiled moderation will define our AI hall monitors as they separate scientific fact from fiction on our social media platforms, adjudicate the precise definition of "hate speech," and assess what specifically constitutes a threat of violence. Veiled moderation of this kind will also govern our children's AI instructors, our doctors, and the programs responsible for the split-second decision of who lives and who dies in a driverless car accident on the highway. We will have the facade of impartiality, and the rule of our programmer kings.
A persistent problem I see among today's "techno-optimists" is their abject failure to articulate a compelling vision of a future dominated by sufficiently advanced artificial intelligence. Most companies working in the space resist naming even a business goal, let alone a global one. These are extremely smart people working on an enormous problem, the solution of which will generate so much value we can all safely assume ungodly sums of money will be captured. But at least this side of the paperclip apocalypse, the introduction of a paradigm-shifting, powerful technology into the world without a clear purpose can only guarantee chaos. Ascent for some, destruction for others.
There will be dangerous applications and benign applications. There will be bitchy digital schoolteachers quietly trained by anti-American ideologues, and there will be giant, sexy hologram girls gracing the Tokyo skyline. There could literally be blade runners, our 21st-century information police, hell-bent on unearthing the hidden bots and revealing their motives to the world. Maybe that will be me, with the help of my clone army. Again, I'm not promising any of this is good or bad. I can only promise entertainment. But we should probably get ahead of the human component, because stripped of all their technological quirks, these are human questions underneath.
Artificial intelligence is a centralizing technology. Today, powerful language models are exorbitantly expensive to build and run, and they're controlled by a small group of people. With even our most ardent doomers acknowledging annihilation is a remote prospect, the only alignment that matters is an alignment of values: not between robots and humans, but between the men who control the robots and the rest of us. It's a question that never seems to change, with an answer that never seems to satisfy.
Who watches the watchmen?
-SOLANA