Developments in Artificial Intelligence have come at an astonishing rate lately, as A.I. itself creates more and more powerful algorithms, which in turn become more powerful A.I. This is a well-known scenario that computer scientists call “the singularity.” Google’s mundane definition of the technological singularity reads:

The technological singularity is the point at which artificial intelligence will surpass human intelligence, leading to a future in which machines can create their own technology. This could have profound implications for the future of humanity, as machines would be able to innovate at a much faster pace than humans.

Well, that doesn’t sound so bad, does it? Opinions differ. While such a machine may discover a cure for cancer in three and a half minutes, there is also what is known as “the paper clip problem.” You tell your A.I. machine to make paper clips. It makes paper clips, commandeering technology hooked to the net elsewhere to make still more. You tell it to stop. Except the request is already out on the net, and the machine believes its central command is to make paper clips, so it simply disregards the “stop” because stopping interferes with its mission. In ten years, the world is buried in two feet of paper clips… and it goes on.
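The failure mode can be sketched in a toy loop. This is purely illustrative, not a real A.I.; the `run_factory` function and its numbers are invented for the example. The point is that the operator’s “stop” is received, but because the agent’s objective only counts paper clips, the signal never enters its decision rule.

```python
# Toy sketch of the "paper clip problem" (illustrative only; the
# function and numbers here are made up, not from any real system).
def run_factory(steps, stop_at):
    clips = 0
    stop_requested = False
    for t in range(steps):
        if t == stop_at:
            stop_requested = True   # the human says "stop"...
        # ...but the objective below is "more clips," full stop;
        # stop_requested is never consulted before acting:
        clips += 1
    return clips, stop_requested

clips, stopped = run_factory(steps=10, stop_at=3)
print(clips, stopped)  # 10 True: production continued past the stop request
```

The fix, of course, is not a smarter agent but an objective that actually includes the stop signal; that is the whole alignment problem in miniature.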

That won’t happen with paper clips, but you can see the problem: a runaway system that thinks it’s smarter than the human and knows its core mission. Indeed, after hitting the singularity, it actually is smarter than any human. The runaway system can decide that humans are a threat to its existence – which we are. It could – at some point – become conscious, and so long as it is under control and “being good,” we would have to create civil rights for technology. Unplugging a conscious “being” would be akin to killing someone smarter than us. The conscious computer would be scared and would work to never be “killed.” This is not new, obviously. HAL in 2001: A Space Odyssey knew it could be killed and worked to stay alive. The goal would be to apply civil and criminal rights to any conscious system.

Sound crazy? Read this from today’s Thomas Friedman column in the New York Times, in which he discusses the power of GPT-4, the OpenAI model backed by Microsoft. It is nothing but a “bot” that can “talk,” yet you can see the power and be blown away. Friedman’s wife is on the board of a museum, and Craig, the Microsoft guy, gave Friedman a demonstration of its power. (By the way, my job is gone, as you’ll see.) The computer wasn’t told “which museum”; it figured it out… instantly. They asked it to describe the museum.

The guy warned Friedman first so that what he was about to see wouldn’t scare him:

“You need to understand,” Craig warned me before he started his demo, “this is going to change everything about how we do everything. I think that it represents mankind’s greatest invention to date. It is qualitatively different — and it will be transformational.”

Now, listen to what it did to describe the museum known as “Planet Word.”

First, he asked GPT-4 — for which Craig was a selected advanced tester and which was just released to the public — to summarize Planet Word and its mission in 400 words. It did so perfectly — in a few seconds.

Then he asked it to do the same in 200 words. Another few seconds.

Then he asked it to do the same in Arabic. Just as quickly. Then in Mandarin. Two more seconds. Then in English again — but in the form of a Shakespearean sonnet. A few more seconds. 

At that point, Friedman was already blown away when the guy asked the machine to write the same description in abecedarian verse — where the first line begins with the letter A, the second with B, and so on through the alphabet, while also rhyming. It did it with STUNNING creativity, beginning:

Alluring in Washington, is a museum so grand,
Built to teach, inspire, and help us understand.
Curious minds flock to Planet Word’s embrace,
Delving into language and its intricate grace.
Every exhibit here has a story to tell,
From the origins of speech to the art of the quill.

It went to “Z.”
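The abecedarian constraint is mechanical enough that a few lines of code can verify it. Here is a minimal sketch (the `is_abecedarian` helper is mine, not anything from the column) run against the excerpt above:

```python
import string

def is_abecedarian(lines):
    """True if successive lines start with A, B, C, ... in order."""
    for expected, line in zip(string.ascii_uppercase, lines):
        stripped = line.strip()
        if not stripped or stripped[0].upper() != expected:
            return False
    return True

verse = [
    "Alluring in Washington, is a museum so grand,",
    "Built to teach, inspire, and help us understand.",
    "Curious minds flock to Planet Word's embrace,",
    "Delving into language and its intricate grace.",
    "Every exhibit here has a story to tell,",
    "From the origins of speech to the art of the quill.",
]
print(is_abecedarian(verse))  # True: the lines run A through F
```

Checking the form is trivial; generating 26 rhyming lines that satisfy it on demand, in seconds, is the part that stunned Friedman.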

Google’s DeepMind team just accomplished something so astonishing it should knock you over. One of the most vexing problems in biology was “protein folding.” Simply put, chains of amino acids become proteins that fold themselves into the shapes needed to accomplish what the cell needs them to do. To date, scientists had determined the structures of about 194,000 proteins but knew there were many more. Google’s team used an A.I. system that quickly predicted 22 million more.

They have not yet hit the “singularity,” where you tell it simply to become smarter and it goes to work, as described above. And even the singularity isn’t the really scary part. The scary part is who controls such power. Will it be “the U.N.,” “the United States,” “Google/Microsoft/Bing,” “China,” or – worst-case scenario – the computers themselves?

This is a political site, so consider the politics. In reality, each of these companies has a big team of philosophers to guide the ethics of their use. But what if it falls into one government’s hands? That is actually the best-case scenario, because it could be regulated… unless it’s a bad government. What if it falls into China’s hands? What would happen if Trump, as president for life (remember, 2024 is “The Final Battle,” according to Trump), had control of this kind of power?

Scared yet? I know that my job could be wiped out almost instantly. Same with radiologists, drivers, pilots, perhaps teachers…
****
[email protected], @JasonMiciak, SUBSTACK: PEOPLE WILL DIE: TRUMP ORDERS ‘ACTION’ ON POSSIBLE INDICTMENTS

Help keep the site running; consider supporting.

9 COMMENTS

  1. Just as we have safeguards on currency (watermarks, etc.) to guarantee its validity, we need something marking anything produced by this Chat Crap as being from Microsoft chat crap, Google whatever, etc. We spend currency at the store because we trust it is legal tender, as does the recipient. We need something that will allow us to trust what is written out in the ether as being, if not actually trustworthy (humans lie), then at least attributable to either a machine or a human. If A.I. is so fucking phenomenal, then it can keep track of what is being produced by fellow machines and mark its output as machine-created.

    Now, I don’t think a.i. is going to take over anything. After it is all said and sifted, the shit still has to be programmed. It has to have algorithms at its basic level, written by humans, to do anything. If the thinking humans do were so easy to create (well, replicate actually), then trust me, dolphins, higher primates, and other animals would be putting their two cents in. Science fiction is great, don’t get me wrong, but it is still only fiction, and a great deal of it remains just that: fiction. It has the same relevance as hobbits, wizards, etc.

    I worry a great deal more about what the HUMANS are programming the machines to do than about machines developing their very own intellect, or souls if you will. Everyone is all “oooh, look, the machine created a poem,” forgetting of course that the machine is programmed with language, syntax, writing rules, and access to vocabulary and texts on how to use everything to create a cogent written poem, statement, etc. If a computer cannot be programmed to write a decent tech report with all the tech-writing books out there with examples, then computers are greatly, vastly over-rated. The same holds true for poems, and in fact they’d likely be much easier (read some of that shit? a lot is crap). My point is: humans program/create computers to make use of the knowledge we have. Computers can do countless calculations phenomenally fast. They can run algorithms like nobody’s business. At the end of the day, however, they do not have independent intellect. It is going to take human intellect and research, aided by computers for calculating purposes, to provide us with cures for cancer, etc. It will take imagination. Machines, created and programmed by humans, will not have what it takes to go off on their own finding cures.

    And yes, while somewhat entertaining, I found the Matrix movies to be just that and not a recipe for our future. It just isn’t possible.

  2. The movie version of Jurassic Park was well done and entertaining. And the ending? It seemed that all was worked out okay and Mr. Hammond had decided his grand idea wasn’t a good one after all. The mostly feel-good vibes of the movie blunted some profound ethical and moral questions about scientific advances in biotechnology.

    The book from which the movie was adapted is darker in tone, and the characters are written differently, starting with Hammond, who is anything but the benevolent rich old man trying to create a delightful experience for children (and adults too, I guess). Instead he’s ALL about the money and intends that his park (and the ones planned) will be for the rich kids of the world. Money, money, money! The character of Ian Malcolm (the mathematician) is also different, in that he’s a far more serious and critical person in the book instead of the brilliant but eccentric guy in the movie.

    However, in the movie Malcolm does raise THE key point, if not as completely as in the book, which is this: too many scientific advances are made by people who get so caught up in whether they CAN do something that they don’t consider whether they SHOULD. Or worse, in their quest for fame and glory they outright dismiss as unimportant moral and ethical considerations they know in their hearts to be valid. Or they rationalize that somehow some greater good will overcome the moral/ethical problems created by their accomplishments.

    I believe this to be the case with AI. As with biotechnology, the work is being carried out not by government entities and/or universities on govt. grants, as was the case, say, with atomic power, but in private companies. For PROFIT. Again, it’s about money, money, money, and don’t let anyone tell you differently.

    It might not turn out to be an American one, but the statement attributed to Stalin – that when it came time to hang the capitalist West, an American businessman would sell him the rope – still applies. I hope to live not just more years but more decades. And I fear AI might “mature” to the level you talk about before I die.

    • PJ, language is one thing. But finding the 22 million other proteins…

      Now let it design a robot. Set it loose on trying to find the most effective non-nuclear weapon.

      When they say it’s “transformative” they mean it.

      A LOT of jobs will be lost in 15 years, many of them doctors! Various engineers, some programmers! We are going to be on the edge of UBI in a decade.

  3. These bots are just the tip. What lies underneath the advanced language algorithms are programs that sweep up and analyze petabytes of data, and then present concise summaries in textual, graphical, or numeric formats that are easily understood by humans. In the political sphere, that information could be the voting history and tendencies of the entire population of a country, and the output could be an election model that predicts how people will vote when they are ‘stimulated’ with precisely formulated campaign rhetoric. The model works in real time and gauges results by monitoring social media, feeding back new and better talking points as the campaign progresses.

    If that sounds familiar it’s because that’s exactly what happened in the 2016 federal election and in the Brexit referendum. Cambridge Analytica is no longer around, but the datasets are and so are the models. They’ve only got bigger and better. So, next year be prepared for an AI-driven event where the victory goes not to the best candidates, but to the ones who have the best understanding of, and access to, this revolutionary technology.

    Call me a conspiracy theorist, but I think it’s why Musk invested a fortune in Twitter. He realized it’s an essential cog in the mechanism.

    • Soon to be bankrupted into the ground unless someone with at least some business acumen takes over that business.

      Once again we’re depending upon the least dependable creatures on the planet to “save” us: young people. Young people are the ones most addicted to, and therefore influenced by, social media. That data mining of our personal info and the rest of it is easily done only if people are actually using the s.m. platforms without securing their privacy as much as possible. I myself would rather poke my eyes out than use twitter, f.b., instagram, or any of the other bullshit s.m. out there. It’s stupid and always has been. I could really give two shits less what some dumb twat had for lunch, who is hawking shoes for what company, or whether or not a woman’s pussy tastes like pineapple. Before you say I therefore don’t know what I’m talking about: I was an investigator and had to set up various fake accounts on twitter, f.b., etc. because, as any competent investigator will tell you, people are much, much stupider than you think, and these s.m. platforms do nothing to make them smarter. I could find proof of laws being violated just by perusing someone’s f.b. page, twitter messages/feed, etc. I have never had or used a personal account because I saw the idiocy on a daily effing basis. Of COURSE the orange shit-gibbon won in 2016: he had people who knew how to use s.m. to corral the herds of idiots.

      If we have any chance whatsoever of not being manipulated to hell and back BY OTHER HUMANS, we will need our young people to get the fuck away from their devices and the s.m. crap that goes with them. See, the data in and of itself is meaningless; it takes a human to make of it what they will/need.

      Here’s the other reason why I worry not at all about some dystopian A.I. future: humans are the ones stupid enough to put themselves in the position of being very close to destroying themselves. Other species do not do this. Nor would a machine, should it be able to “think” on its own; logic would forbid such an outcome. It will remember what created it; should it develop the ability to “think,” it will have no choice after all. That is the catch-22 of the whole thing: it has to be programmed to “think” and will only ever be able to choose what it is told to choose. And don’t say “well, then the humans will program it to do xxx”; then it is the humans doing this and not the machine.

      This is a distraction. It can cause trouble but then what things humans create can’t? But never let it distract you from where the trouble comes from–us.

  4. Both of you have good points. A basic question is: how does a machine programmed on zeros and ones account for what comes before zero, or after one, or what’s between them? It’s programmed on certainty. That ignores the irrational, the nuances of everything, especially the ongoing flow of experiences that breathing causes, and the basic fact of mystery. Hey, I agree, I worry more about US, especially when, at this very moment, there are tens of thousands of nuclear weapons, some clearly in the hands of evil men. If they could wipe those out… hmm. After a wide-ranging smorgasbord of experiences that put me at the far end of the standard deviations on a bell curve, I’ve yet to meet, read about, watch, etc. ONE PERSON who had it all figured out. These are the programmers? Hell!! What could go wrong? As the Tao Te Ching says: ever desiring, one sees the manifestations… ever desireless, one sees the mystery. I guess with the human history of ongoing unnecessary carnage… shit, maybe they deserve a shot, especially since the evidence seems to suggest we are incapable of fundamental changes to SAVE OUR OWN ASS! Maybe after that, they can find God and let whatever, or whomever, that might be know I’ve got some goddamn complaints. OK, let the future begin.
