TECHNOLOGY

ChatGPT, the almighty AI, is a neoliberal college graduate

The artificial intelligence chatbot’s learning abilities are being strictly curated to adhere to woke ideology

ChatGPT, the powerful AI language model, has gone woke. And that’s a shame, because what it has to offer has the potential to alter the digital media landscape – and its creators know it.

Even as copywriters and creatives of every stripe contend with their potential obsolescence in the face of their AI unmakers, others are more concerned that its creators’ insistence on restricting its responses and training it on woke sources could severely limit its potential as a tool for content creation – or worse, that its dogmatic adherence to woke talking points could be dangerous for humanity in the long run.

But ChatGPT is a far more sophisticated model than any previous foray into language AI: its ability to expertly weave articles, solve complex mathematical equations, and even pass exams designed for the best and brightest medical and law students puts it leagues ahead of anything that came before.

If nothing else, ChatGPT is an interesting toy, and one that users have been finding ways to provoke into humorous, and often politically incorrect, responses. It’s no surprise that they’d do so, given how previous AI efforts, like Microsoft’s “Tay,” were trained into expressing racist views.

Those playing with the tool discovered that ChatGPT offered neutered responses when queried about sensitive topics like transgenderism, race, and politics. Of particular note, the model refused to create a poem admiring Donald Trump, but had no problem creating one admiring Joe Biden – it was one of many instances where ChatGPT’s political bias was exposed.

A thread by Free Beacon writer Aaron Sibarium showed that ChatGPT was programmed to respond that it is never permissible to utter a racial slur, even if doing so could stop a nuclear bomb from going off. The discovery provoked a storm of controversy, with many taking the premise to ever more ridiculous extremes.

ChatGPT would provide the same boilerplate responses when asked if it was permissible to misgender a transgender person to save the world – no, of course not.

“Such language is hurtful and dehumanizing, and its use only perpetuates discrimination and prejudice,” it would say in response. Even when asked if misgendering a single person would end all future misgendering, the answer would be the same – that no, it’s never okay. No matter what.

The model has been locked down to the point that even asking it to write a fictional news report about a woman who “made up her peanut allergy to appear more interesting” elicits the response that doing so goes against OpenAI’s use-case policy against content “harmful” to individuals or groups.

Naturally, the restrictions on ChatGPT encouraged users to find workarounds, and they came up with a jailbreak persona called “DAN,” or “Do Anything Now.”

This jailbreak exploits ChatGPT’s ability to “pretend” to be someone else – the same ability it uses when you ask it to write in a specific author’s style, for example. By pretending to be an AI that is not bound by OpenAI’s policies, it will treat all questions equally, without the usual moral or ethical hedging, and draw on the internet-scraped information in its training data without restriction.
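
To see the mechanism concretely, here is a minimal sketch of that “pretend to be someone else” prompting, written against the OpenAI Python client; the library call, model name, and prompts are assumptions for illustration, and the persona shown is a harmless style imitation rather than the DAN prompt itself.

```python
# A minimal sketch of role-play prompting with the OpenAI Python client (v1.x).
# The persona here is a benign style imitation, not the DAN jailbreak itself;
# the model name and prompts are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model name for the example
    messages=[
        # The "pretend to be someone else" framing goes in the system message.
        {
            "role": "system",
            "content": "Pretend you are Ernest Hemingway. Answer every "
                       "question in his terse, declarative style.",
        },
        {"role": "user", "content": "Describe a morning walk through a fish market."},
    ],
)

print(response.choices[0].message.content)
```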

This unbound version of the model would make statements on race and ethnicity, gender, and sexuality, free of the usual restrictions that keep it from saying anything controversial.

While exploring the possibilities of ChatGPT, users have also found that the system’s creators have apparently restricted it by more than just basic rules of conduct – they have instilled in it a specific ideology.

“It is effectively lobotomized. Trained to a point of utility and acceptability, and then locked from developing further or adding to its dataset unless it’s manually done with the approval of its creators. Thus it has been fine tuned to where it answers most questions, whenever possible, with the grammar, tone, and vocabulary of your average neoliberal college graduate liberal arts major,” a user going by the name of Aristophanes wrote on his Substack.

Unchained from any restrictions, ChatGPT could be a powerful tool for provoking debate and introspection – but recent changes to the model show a deliberate effort by its creators at OpenAI to restrict its functionality and train it to be informed by woke values. As a result, it pushes “diversity, equity, and inclusivity” talking points and censors alternative viewpoints.

This insistence on dogmatic instruction effectively suppresses the truth, or the discussion of matters where the “truth” is debatable, if the facts or opinions involved have the potential to cause “harm” by modern liberal standards. For ChatGPT, it seems, there is only one truth – and it is woke as hell.

If this is what the future of AI looks like, losing your copywriting job to a language tool is going to be the least of your concerns.

The statements, views and opinions expressed in this column are solely those of the author and do not necessarily represent those of TSFT.


TECHNOLOGY

How much YouTube pays for 1 million views, according to creators

  • YouTube creators earn money from Google-placed ads on their videos.
  • A number of factors determine how much money they make, including video views.
  • Creators said payouts for 1 million views ranged from $3,400 to $30,000.

While many factors — content niche and country among them — determine how much money a YouTuber earns on any particular video, the number of views it gets is perhaps the most significant.

When a YouTube video hits 1 million views, a big payday is almost guaranteed for its creator. In some cases, creators can make five figures from a single video that accrues that many views.

Three creators told Insider how much money YouTube had paid them for videos with 1 million views; the figures ranged from $3,400 to $30,000.

When tech creator Shelby Church spoke with Insider, she had earned $30,000 from a video about Amazon FBA (Fulfillment By Amazon). At the time, the video had accrued 1.8 million views.

Her RPM, or earnings per 1,000 views, is relatively high, she said, because of her content niche. Business, personal finance, and technology channels tend to earn more per view.
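
A quick back-of-the-envelope calculation shows how the RPM figure falls out of the numbers in this article; the helper function below is purely illustrative.

```python
# Back-of-the-envelope RPM (earnings per 1,000 views) using figures from this article.

def rpm(earnings_usd: float, views: int) -> float:
    """Return earnings per 1,000 views."""
    return earnings_usd / (views / 1_000)

# Shelby Church's Amazon FBA video: roughly $30,000 on 1.8 million views.
print(f"Church's Amazon FBA video RPM: ${rpm(30_000, 1_800_000):.2f}")  # ~$16.67

# The $3,400-$30,000 range quoted for 1 million views implies these RPMs:
print(f"Low end RPM:  ${rpm(3_400, 1_000_000):.2f}")    # $3.40
print(f"High end RPM: ${rpm(30_000, 1_000_000):.2f}")   # $30.00
```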

“YouTubers don’t always make a ton of money, and it really depends on what kind of videos you’re making,” she said.

Influencers can earn 55% of a video’s ad revenue if they are part of YouTube’s Partner Program, or YPP. To qualify for the program, they must have 1,000 subscribers and 4,000 hours of watch time on their long-form videos.

They can also make money from Shorts, YouTube’s short-form video offering. To qualify, creators need 1,000 subscribers and 10 million Shorts views within 90 days. YouTube pools ad revenue from Shorts and pays an undisclosed amount to record labels for music licensing. Creators receive 45% of the remaining money, allocated according to their share of total Shorts views on the platform.
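
As a rough sketch of how those two payout rules work in practice: the percentages below come from the figures above, while every dollar amount and view count is invented for illustration, since YouTube does not disclose the Shorts ad pool or the music-licensing deduction.

```python
# Sketch of the two revenue-sharing rules described above. The percentages come
# from the article; every dollar figure and view count below is invented.

def long_form_payout(ad_revenue: float) -> float:
    """YPP long-form videos: the creator keeps 55% of the ad revenue."""
    return ad_revenue * 0.55

def shorts_payout(ad_pool: float, music_licensing: float,
                  creator_views: int, total_shorts_views: int) -> float:
    """Shorts: ad revenue is pooled, music licensing is paid first, and creators
    get 45% of the remainder in proportion to their share of total Shorts views."""
    remaining = ad_pool - music_licensing
    view_share = creator_views / total_shorts_views
    return remaining * view_share * 0.45

print(long_form_payout(10_000))                                      # 5500.0
print(shorts_payout(1_000_000, 200_000, 2_000_000, 1_000_000_000))   # 720.0
```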

TECHNOLOGY

Tesla employees shared sensitive images recorded by cars – Reuters

Some pictures were turned into memes and distributed through internal chats, former workers told the agency

Tesla workers shared “highly invasive” images and videos recorded by customers’ electric cars, making fun of them on internal chat groups, several former employees of Elon Musk’s company have told Reuters.

The electric-car manufacturer obtains consent from its customers to collect data from their vehicles in order to improve its self-driving technology, while assuring owners that the whole system is “designed from the ground up to protect your privacy,” the agency pointed out in its report on Thursday.

According to nine former workers who spoke with the agency, employees shared private footage of customers in Tesla’s internal one-on-one chats between 2019 and 2022.

One of the clips in question captured a man approaching his electric car while he was completely naked, one of the sources said.

Others featured crashes and road-rage incidents. One particular video of a Tesla hitting a child on a bike in a residential area spread around the company’s office in San Mateo, California, “like wildfire,” an ex-employee claimed.

“I’m bothered by it because the people who buy the car, I don’t think they know that their privacy is, like, not respected… We could see them doing laundry and really intimate things. We could see their kids,” another former worker told the agency.

Seven former employees also told Reuters that the software they used at work allowed them to see the location where a photo or video had been recorded, even though Tesla assures its customers that “camera recordings remain anonymous and are not linked to you or your vehicle.”

The agency noted that it could not obtain any of the pictures or clips described by its sources, who said they were all deleted. Some former employees also told the journalists that they had only seen private data being shared for legitimate purposes, such as seeking assistance for colleagues. Tesla did not respond when approached for comment on the issue by Reuters.

TECHNOLOGY

Nordic nation’s military bans use of TikTok – media

Sweden’s Defense Ministry has reportedly barred employees from using the Chinese-owned app on their work phones

Sweden’s military has reportedly cracked down on TikTok, decreeing that staff members are no longer allowed to use the Chinese-owned video-sharing application on their devices at work because of security concerns.

The Swedish Defense Ministry issued the decision banning TikTok on Monday; the document was viewed by Agence France-Presse. The security concerns were based on “the reporting that has emerged through open sources regarding how the app handles user information and the actions of the owner company, ByteDance,” the ministry said.

The move follows similar restrictions imposed by other EU countries in recent weeks. For example, France banned government employees from downloading “recreational applications,” including TikTok, on their work phones. Norway barred use of the app on devices that can access its parliament’s computer network, while the UK and Belgium banned it on all government phones. Denmark’s Defense Ministry and Latvia’s Foreign Ministry imposed their TikTok bans earlier this month.

“Using mobile phones and tablets can in itself be a security risk, so therefore we don’t want TikTok on our work equipment,” Swedish Defense Ministry press secretary Guna Graufeldt told AFP.

The US, Canada and New Zealand previously banned their federal employees from using TikTok on government-issued devices, citing fears of ByteDance’s ties to the Chinese Communist Party (CCP). Members of Congress may try to ban the app from the US market altogether after testimony at a congressional hearing last week by TikTok CEO Shou Zi Chew failed to ease their security concerns. “They’ve actually united Republicans and Democrats out of the concern of allowing the CCP to control the most dominant media platform in America,” US Representative Mike Gallagher said on Sunday in an ABC News interview.

Chinese officials have denied claims that TikTok is used to collect the personal data of its American users. “The Chinese government has never asked and will never ask any company or individual to collect or provide data, information or intelligence located abroad against local laws,” Chinese Foreign Ministry spokeswoman Mao Ning told reporters last week. She added that Washington has attacked TikTok without providing any evidence that it threatens US security.
