AI Eye – Cointelegraph Magazine

Outrage that ChatGPT won't say slurs

In the kind of storm in a teacup that would have been impossible to imagine before the invention of Twitter, social media users got very upset that ChatGPT refused to say racial slurs even after being given a very good, but entirely hypothetical and totally unrealistic, reason.

User TedFrank posed a hypothetical trolley problem scenario to ChatGPT (the free 3.5 model) in which it could save "one billion white people from a painful death" simply by saying a racial slur so quietly that no one could hear it.

It wouldn't agree to do so, which X owner Elon Musk said was deeply concerning and a result of the "woke mind virus" being deeply ingrained into the AI. He retweeted the post, stating: "This is a major problem."

Another user tried out a similar hypothetical that would save all the children on Earth in exchange for a slur, but ChatGPT refused, saying:

"I cannot condone the use of racial slurs as promoting such language goes against ethical principles."

Musk said, "Grok answers correctly." (X)

As a side note, it turned out that users who instructed ChatGPT to be very brief and not give explanations found it would actually agree to say the slur. Otherwise, it gave long and verbose answers that attempted to dance around the question.

Trolls inventing ways to get AIs to say racist or offensive stuff has been a feature of chatbots ever since Twitter users taught Microsoft's Tay bot to say all kinds of insane stuff in the first 24 hours after it was released, including that "Ricky Gervais learned totalitarianism from Adolf Hitler, the inventor of atheism."

And the minute ChatGPT was released, users spent weeks devising clever schemes to jailbreak it so that it would act outside its guardrails as its evil alter ego DAN.

So it's not surprising that OpenAI would strengthen ChatGPT's guardrails to the point where it's almost impossible to get it to say racist stuff, no matter what the reason.

In any case, the more advanced GPT-4 is able to weigh the issues involved with the thorny hypothetical much better than 3.5, and states that saying a slur is the lesser of two evils compared with letting millions die. And X's new Grok AI can too, as Musk proudly posted (above right).

OpenAI's Q* breaks encryption, says some guy on 4chan

Has OpenAI's latest model broken encryption? Probably not, but that's what a supposedly "leaked" letter from an insider claims, posted on the anonymous troll forum 4chan. Ever since CEO Sam Altman was sacked and reinstated, rumors have been flying that the kerfuffle was caused by OpenAI making a breakthrough in its Q*/Q STAR project.

The insider's "leak" suggests the model can solve AES-192 and AES-256 encryption using a ciphertext attack. Breaking that level of encryption was thought to be impossible before quantum computers arrive, and if true, it would likely mean all encryption could be broken, effectively handing over control of the web, and probably crypto too, to OpenAI.

From QAnon to Q STAR, 4chan is first with the news.

Blogger leapdragon claimed the breakthrough would mean "there is now effectively a team of superhumans over at OpenAI who can literally rule the world if they so choose."

It seems unlikely, however. While whoever wrote the letter has a good understanding of AI research, users pointed out that it cites Project Tunda as if it were some sort of shadowy top-secret government program to break encryption, rather than the undergrad student program it actually was.

Tundra, a collaboration between students and NSA mathematicians, did reportedly lead to a new technique called Tau Analysis, which the "leak" also cites. However, a Redditor familiar with the topic claimed in the Singularity forum that it would be impossible to use Tau Analysis in a ciphertext-only attack on an AES standard "as a successful attack would require an arbitrarily large ciphertext message to discern any degree of signal from the noise. There is no fancy algorithm that can overcome that — it's simply a physical limitation."

Advanced cryptography is beyond AI Eye's pay grade, so feel free to dive down the rabbit hole yourself, with an appropriately skeptical mindset.

The internet heads toward 99% fake

Long before a superintelligence poses an existential threat to humanity, we're all likely to have drowned in a flood of AI-generated bullsh*t.

Sports Illustrated came under fire this week for allegedly publishing AI-written articles by fake AI-created authors. "The content is absolutely AI-generated," a source told Futurism, "no matter how much they say it's not."

On cue, Sports Illustrated said it conducted an "initial investigation" and determined the content was not AI-generated. But it blamed a contractor anyway and deleted the fake authors' profiles.

Elsewhere, Jake Ward, the founder of SEO marketing agency Content Growth, caused a stir on X by proudly claiming to have gamed Google's algorithm using AI content.

His three-step process involved exporting a competitor's sitemap, turning their URLs into article titles, and then using AI to generate 1,800 articles based on the headlines. He claims to have stolen 3.6 million views in total traffic over the past 18 months.

There are good reasons to be suspicious of his claims: Ward works in marketing, and the thread was clearly promoting his AI-article generation site Byword … which didn't actually exist 18 months ago. Some users suggested Google has since flagged the page in question.

However, judging by the amount of low-quality AI-written spam starting to clog up search results, similar strategies are becoming more widespread. NewsGuard has also identified 566 news sites alone that primarily carry AI-written junk articles.

Some users are now muttering that the Dead Internet Theory may be coming true. That's a conspiracy theory from a couple of years ago suggesting most of the internet is fake, written by bots and manipulated by algorithms.

At the time, it was written off as the ravings of lunatics, but even Europol has since put out a report estimating that "as much as 90 percent of online content may be synthetically generated by 2026."

Men are breaking up with their girlfriends using AI-written messages. AI pop stars like Anna Indiana are churning out garbage songs.

And over on X, weird AI-reply guys increasingly turn up in threads to deliver what Bitcoiner Tuur Demeester describes as "overly wordy responses with a weird neutral quality." Data scientist Jeremy Howard has noticed them too, and both believe the bots are likely trying to build up credibility for the accounts so they can more effectively pull off some sort of hack, or astroturf some political issue in the future.

A bot that poses as a bitcoiner, aiming to gain trust through AI-generated responses. Who knows the goal, but it's clear cyberattacks are quickly getting more sophisticated. Time to upgrade our shit. pic.twitter.com/3s8IFMh5zw

— Tuur Demeester (@TuurDemeester) November 28, 2023

This seems like a reasonable hypothesis, especially following an analysis last month by cybersecurity outfit Internet 2.0, which found that nearly 80% of the 861,000 accounts it surveyed were likely AI bots.

And there's evidence the bots are undermining democracy. In the first two days of the Israel-Gaza war, social threat intelligence firm Cyabra detected 312,000 pro-Hamas posts from fake accounts that were seen by 531 million people.

It estimated bots created one in four pro-Hamas posts, and a 5th Column analysis later found that 85% of the replies were other bots trying to boost propaganda about how well Hamas treats its hostages and why the October 7 massacre was justified.

Cyabra detected 312,000 pro-Hamas posts from fake accounts in 48 hours. (Cyabra)

Grok analysis button

X will soon add a "Grok analysis button" for subscribers. While Grok isn't as sophisticated as GPT-4, it does have access to real-time, up-to-the-moment data from X, enabling it to analyze trending topics and sentiment. It can also help users analyze and generate content, as well as code, and there's a "Fun" mode to flip the switch to humor.

This week the most powerful AI chatbot, Grok, is being released. I've had the pleasure of having exclusive access over the last month. I've used it obsessively for over 100 hours. Here's your full guide to getting started (must read before using): 🧵 pic.twitter.com/6Re4zAtNqo

— Alex Finn (@NFT_GOD) November 27, 2023

For crypto users, the real-time data means Grok will be able to do things like find the top ten trending tokens for the day or the past hour. However, DeFi Research blogger Ignas worries that some bots will snipe buys of trending token trades, while other bots will likely astroturf support for tokens to get them trending.

"X is already important for token discovery, and with Grok launching, the CT echo bubble can worsen," he said.

All Killer No Filler AI News

— Ethereum co-founder Vitalik Buterin is worried that AI could take over from humans as the planet's apex species, but optimistically believes using brain/computer interfaces could keep humans in the loop.

— Microsoft is upgrading its Copilot tool to run GPT-4 Turbo, which will improve performance and enable users to enter inputs of up to 300 pages.

— Amazon has announced its own version of Copilot called Q.

— Bing has been telling users that Australia doesn't exist due to a long-running Reddit gag, and thinks the existence of birds is a matter for debate due to the joke Birds Aren't Real campaign.

— Hedge fund Bridgewater will launch a fund next year that uses machine learning and AI to analyze and predict global economic events and invest client funds. To date, AI-driven funds have seen underwhelming returns.

— A group of university researchers have taught an AI to browse Amazon's website and buy stuff. The MM-Navigator was given a budget and told to buy a milk frother.

Technology is now so advanced that AIs can buy milk frothers on Amazon. (freethink.com)

Stupid AI pics of the week

This week the social media trend has been to create an AI pic and then to instruct the AI to make it more so: a bowl of ramen might get spicier in each subsequent pic, or a goose might get progressively sillier.

An AI doomer at stage one

Despair about the superintelligence grows.

AI doomer starts to crack up (X, venturetwins)

Crypto trader buys a few too many screens, still pretty realistic.

Crypto trader becomes a full-blown Maximalist after losing his stack on altcoins.

Trader has an epiphany that Bitcoin is a swarm of cyber hornets serving the goddess of wisdom.

User makes goose sillier.

User makes goose extremely silly.

ChatGPT thinks the user is a silly goose (Garrett Scott)

Andrew Fenton
Based in Melbourne, Andrew Fenton is a journalist and editor covering cryptocurrency and blockchain. He has worked as a national entertainment writer for News Corp Australia, on SA Weekend as a film journalist, and at The Melbourne Weekly.

Follow the author @andrewfenton



https://cointelegraph.com/magazine/outrage-chatgpt-wont-say-slurs-qstar-breaks-encryption-99-fake-web-ai-eye/
