
OpenAI announces advanced versions of its GPT-4 artificial intelligence software

During its recent developer conference, OpenAI unveiled advanced versions of its GPT-4 artificial intelligence software.

The company indicated during the conference that the new GPT-4 Turbo software will come in two versions: one designed exclusively for text analysis, and the second designed for text and image analysis.
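For illustration only, here is a minimal sketch of how a developer might call the two variants through OpenAI's official Python library. The model identifiers used below ("gpt-4-1106-preview" for text and "gpt-4-vision-preview" for text plus images) reflect names in use around the time of the announcement and should be treated as assumptions that may since have changed.

```python
# Minimal sketch: calling the text-only and text-plus-image variants of
# GPT-4 Turbo via OpenAI's Python SDK (v1.x). Model names are assumptions
# based on identifiers in use around the announcement.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# 1) Text-only analysis
text_reply = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[{"role": "user", "content": "Summarize this email in two sentences: ..."}],
)
print(text_reply.choices[0].message.content)

# 2) Text-plus-image analysis (vision variant)
vision_reply = client.chat.completions.create(
    model="gpt-4-vision-preview",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe what this image shows."},
            # Hypothetical image URL, for illustration only
            {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
)
print(vision_reply.choices[0].message.content)
```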

The cost of “GPT-4 Turbo” for text processing will be $0.01 per 1,000 input tokens (about 750 words) and $0.03 per 1,000 output tokens. As for images, the new software will charge about $0.00765 to process an image with a resolution of 1080×1080 pixels.
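As a rough illustration of what those rates mean in practice, the short sketch below estimates the dollar cost of a single text request from made-up token counts, using only the per-token prices quoted above.

```python
# Illustrative cost estimate based on the GPT-4 Turbo text prices quoted above:
# $0.01 per 1,000 input tokens and $0.03 per 1,000 output tokens
# (roughly 750 words per 1,000 tokens).

INPUT_RATE = 0.01 / 1000   # dollars per input token
OUTPUT_RATE = 0.03 / 1000  # dollars per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the approximate dollar cost of a single request."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a 3,000-token prompt (about 2,250 words) with a 500-token reply
# costs roughly $0.03 + $0.015 = $0.045.
print(f"${estimate_cost(3000, 500):.3f}")
```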

“We have improved performance so that we can offer GPT-4 Turbo to be able to analyze data better, and at a lower cost to users,” OpenAI said in an online post.

The post added: “GPT-4 Turbo has been updated with data through April 2023, meaning that the system will be able to give accurate answers to users’ questions about events that occurred up to that date. We have also supplied the software with a large amount of new information and data that will enable it to understand texts and emails better.”

Adobe criticized for selling “hyper-realistic” AI images of violence in Gaza and Israel

Adobe, the world’s digital media leader, is under fire for selling “hyper-realistic” AI-generated images of the “Israeli-Palestinian war.”
The company now finds itself in the spotlight due to artificial intelligence images that could increase the flow of false and misleading information about a sensitive human situation.

Interesting Engineering reported that there is a wave of artificial intelligence and deepfake images spreading on social media, which are fueling fake news and spreading misinformation.

According to the Australian news site Crikey, software giant Adobe is selling artificial intelligence-generated images of the war between Israel and Hamas, a shocking and morally reprehensible example of a company directly profiting from the spread of misinformation online.

Crikey first reported that Adobe is selling fake images depicting the bombing of cities in both Gaza and Israel, some photorealistic, others more obviously computer-generated, and that at least one has begun circulating online, passed off as the real thing.

The image in question, titled “The Conflict between Israel and Palestine,” closely resembles actual photos of Israeli airstrikes in Gaza, but it is not real. It nonetheless ended up on a few blogs and websites without being clearly labeled as AI-generated.

Crikey says it has so far found several AI-generated images currently for sale on Adobe Stock that claim to depict the bloody conflict between Israel and Gaza.

In March, the company released a set of generative AI models called Firefly, similar to OpenAI’s DALL-E and Midjourney, which allow users to type in a prompt and receive an image in return. Two months later, in May, Adobe also brought similar generative AI capabilities to one of its most popular commercial offerings, Adobe Photoshop.

This allows users to create content with a text prompt and edit it using Photoshop tools.

Adobe also allows users to sell artificially created photorealistic images, including those depicting sensitive topics, such as events between Israel and Hamas, on its stock photo subscription service, Adobe Stock.

“Adobe Stock accepts content produced using generative AI tools as long as it meets our submission standards,” its website says.

Adobe Stock users searching for images will encounter a mix of real and AI-generated content. Some AI-generated images may look very similar to real photographs, making it difficult to distinguish between the two.

The blurring of lines between real images and AI-generated images raises ethical concerns about misinformation and the use of these images in sensitive contexts.

While Adobe requires users to label content as “generative AI” before submitting it, several instances were found on the website where images were clearly created using generative AI but were not labeled as such.

This controversy highlights the evolving challenges that come with regulating AI and labeling AI-generated content in an era where technology can create highly compelling visuals.


