OpenAI and Anthropic cross-tested each other's models for issues such as hallucinations and safety.
Jin10 reported on August 28 that OpenAI and Anthropic recently evaluated each other's models to identify potential issues that may have been overlooked in their own internal testing. In blog posts published Wednesday, the two companies said that this summer they ran safety tests on each other's publicly available AI models, examining whether the models showed hallucination tendencies as well as so-called "misalignment," meaning a model not behaving as its developers intended. The evaluations were completed before OpenAI launched GPT-5 and Anthropic released Opus 4.1 in early August. Anthropic was founded by former OpenAI employees.