
#o3

9 posts, 8 participants, 0 posts today

I find it fascinating and disturbing that the reasoning behaviors which make #AI models like #o3 and #deepseek so powerful -- like thinking ahead, checking their work, and backtracking -- are purely the result of how the models are *trained*.

Nothing to do with their "neural architecture" (other than brute number of parameters) ... !

Surely the network structure can somehow be optimized so that these behaviors are converged on more robustly.

arxiv.org/pdf/2503.01307

OpenAI to bring back the o3 model, but GPT-5 will be delayed
After saying in February that it would not publicly release the o3 reasoning model, OpenAI now says it plans to launch both o3 and its next-generation o4-mini "within a few weeks", while the much-anticipated next flagship model, GPT-5, will be pushed back by "a few months" because development has not progressed as hoped.
#人工智能 #ChatGPT #GPT-5 #o3
unwire.hk/2025/04/06/openai-sa

🧠 In this example I implemented the "Stop & Think" dynamic defined by #Anthropic on an #OpenAI Agent based on #o3
⚙️ How does it work? linkedin.com/posts/alessiopoma
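
The details are in the linked post; purely as an illustration, here is a minimal Python sketch of a two-pass "stop and think" step built on the OpenAI Chat Completions API. The model id, prompts, and overall structure are assumptions for the example, not the author's implementation and not Anthropic's exact "think" mechanism.

```python
# Minimal two-pass "stop and think" sketch (assumptions: OpenAI Python SDK v1.x,
# a reasoning-capable model id; not the author's agent or Anthropic's exact tool).
from openai import OpenAI

client = OpenAI()
MODEL = "o3"  # assumption: substitute any model id you have access to

def stop_and_think(question: str) -> str:
    # Pass 1: ask the model to pause and write down a plan/checklist, not an answer.
    notes = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system",
             "content": "Stop and think before answering: list the facts needed, "
                        "possible pitfalls, and a short plan. Do NOT give the answer yet."},
            {"role": "user", "content": question},
        ],
    ).choices[0].message.content

    # Pass 2: answer with the reflection supplied as extra context.
    answer = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": "Use the notes below to give a careful final answer."},
            {"role": "user",
             "content": f"Notes from the think step:\n{notes}\n\nQuestion: {question}"},
        ],
    ).choices[0].message.content
    return answer

if __name__ == "__main__":
    print(stop_and_think("A train leaves at 9:40 and arrives at 12:05. How long is the trip?"))
```

The design point is simply that the first call is forbidden from answering, so the second call always starts from an explicit plan.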

___ 

✉️ If you want to stay up to date on these topics, subscribe to my newsletter: bit.ly/newsletter-alessiopomar

🧠 A test in which an Agent based on #OpenAI's #o3 accesses local files through the #MCP (Model Context Protocol).
⚙️ How does it work? linkedin.com/posts/alessiopoma
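
The linked post has the actual setup; as a rough illustration only, here is a minimal sketch of an MCP server that exposes local file access as tools, using the FastMCP helper from the official MCP Python SDK. The server name, tool names, and transport choice are assumptions, not the author's configuration.

```python
# Minimal MCP server sketch exposing local file access as tools (assumptions:
# the official MCP Python SDK is installed as `mcp`; names are illustrative).
from pathlib import Path

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("local-files")

@mcp.tool()
def read_text_file(path: str) -> str:
    """Return the contents of a local UTF-8 text file."""
    return Path(path).read_text(encoding="utf-8")

@mcp.tool()
def list_directory(path: str = ".") -> list[str]:
    """List the entries of a local directory."""
    return sorted(p.name for p in Path(path).iterdir())

if __name__ == "__main__":
    mcp.run()  # serves the tools over stdio for an MCP-capable agent to call
```

An agent launched against this script over stdio could then call read_text_file or list_directory instead of touching the filesystem directly.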

___ 

✉️ If you want to stay up to date on these topics, subscribe to my newsletter: bit.ly/newsletter-alessiopomar

🧠 #OpenAI has added the code interpreter to its reasoning models (#o1 and #o3 mini).
📈 In the example I load a dataset and the system analyzes it and produces a report on the data.
💡 This feature had been missing; now we can apply more capable models to data analysis.
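
The post appears to describe the in-app experience; for anyone who wants to script something similar, here is a hedged sketch using the OpenAI Assistants API with the code_interpreter tool. The model id, file name, prompt, and the tool's availability for that model are assumptions, not the author's workflow.

```python
# Hedged sketch: "upload a dataset, get an analysis/report" via the OpenAI
# Assistants API with the code_interpreter tool (model id and file name assumed).
from openai import OpenAI

client = OpenAI()

# Upload the dataset so the sandboxed interpreter can read it.
dataset = client.files.create(file=open("data.csv", "rb"), purpose="assistants")

assistant = client.beta.assistants.create(
    model="o3-mini",  # assumption
    tools=[{"type": "code_interpreter"}],
    tool_resources={"code_interpreter": {"file_ids": [dataset.id]}},
)

thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="Analyze the attached dataset and write a short report on its main trends.",
)

# Run the assistant and wait for it to finish.
run = client.beta.threads.runs.create_and_poll(thread_id=thread.id, assistant_id=assistant.id)

# Messages come back newest first; reverse them to print the thread in order.
for message in reversed(list(client.beta.threads.messages.list(thread_id=thread.id))):
    for block in message.content:
        if block.type == "text":
            print(f"{message.role}: {block.text.value}")
```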

___ 

✉️ If you want to stay up to date on these topics, subscribe to my newsletter: bit.ly/newsletter-alessiopomar

Ubuntu will most likely revert the -O3 optimizations

The Ubuntu team had been considering enabling the -O3 compiler optimization level as the default for all packages in Ubuntu, on the expectation that it would improve the performance of every package. It was one of the more significant changes proposed early in the development cycle of the Plucky Puffin release, around October. The mailing list entry said:

* dpkg-buildflags defaults to -O3 instead of -O2. This might require changes in package builds. Please be aware that we already build with -O3 on ppc64el, so look for possible packaging adjustments.
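
As an illustration of what that default controls (this snippet is not part of the Ubuntu plan, and it assumes a Debian or Ubuntu system with dpkg-dev installed), the following sketch asks dpkg-buildflags which CFLAGS package builds receive, and what they would look like with -O3 appended through the DEB_CFLAGS_APPEND hook:

```python
# Illustration only: query dpkg-buildflags for the default CFLAGS used by
# package builds, then again with -O3 appended via DEB_CFLAGS_APPEND.
import os
import subprocess

def cflags(extra_env=None):
    env = {**os.environ, **(extra_env or {})}
    result = subprocess.run(
        ["dpkg-buildflags", "--get", "CFLAGS"],
        capture_output=True, text=True, check=True, env=env,
    )
    return result.stdout.strip()

print("default CFLAGS:   ", cflags())                              # typically contains -O2
print("with -O3 appended:", cflags({"DEB_CFLAGS_APPEND": "-O3"}))  # GCC honours the last -O flag
```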

We told you earlier in the article that binary sizes for the packages would be bigger, and that there was some uncertainty about whether all packages would benefit from this level of optimization.

Benchmarking results for the packages built with the -O3 optimization level were disappointing: the Ubuntu team found that:

  • overall system performance slightly declined, and
  • binary sizes increased.

However, some workloads did see improvements with this optimization level enabled. Given the uncertainty acknowledged in the plan laid out on the mailing list, the Ubuntu team will publish a detailed post about the benchmarking results in the next few weeks.

As a result, the team has decided to drop the -O3 compiler optimization level for all packages and return to the previous default, -O2. The change is likely to be reverted soon to avoid problems when this version of Ubuntu is released in April.

#O3 #2504 #news #Optimization #Plucky #PluckyPuffin #Puffin #Tech #Technology #Ubuntu #Ubuntu2504 #Ubuntu2504Plucky #Ubuntu2504PluckyPuffin #Ubuntu2504Puffin #update

I've been fighting with GPT-4o to process a bunch of similar text data via API and check whether each record follows certain rules. I tried varying my prompt in many ways for a few weeks, but with no luck. Today I got access to o3-mini, and it nailed it on the first try with the same prompt! The output was even better than I originally wanted! #LLM #o3-mini #ChatGPT #ML #AI
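
For context, here is a minimal sketch of the kind of per-record rule check described above, using o3-mini through the OpenAI Python SDK; the rules, sample records, and JSON output shape are illustrative assumptions, not the author's actual prompt or data.

```python
# Minimal per-record rule check with o3-mini (rules and records are illustrative).
import json
from openai import OpenAI

client = OpenAI()

RULES = """\
1. The record must contain a contact email address.
2. The description must be written in English.
3. The description must not exceed 200 characters."""

def check_record(record: str) -> dict:
    response = client.chat.completions.create(
        model="o3-mini",
        response_format={"type": "json_object"},
        messages=[
            {"role": "system",
             "content": "Check whether the user's record follows these rules:\n"
                        + RULES
                        + '\nReply with JSON: {"follows_rules": true/false, "violations": ["..."]}.'},
            {"role": "user", "content": record},
        ],
    )
    return json.loads(response.choices[0].message.content)

for record in [
    "alice@example.com | Lightweight CRM for small teams",
    "bob[at]example | Descripción del producto en español",
]:
    print(check_record(record))
```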

🧠 I ran a few tests with #Grok3, also using its "reasoning" mode, #DeepSearch, and image generation.
🚀 The model is undoubtedly capable, but my impression is that #o3 still has an edge in reasoning.
💡 DeepSearch is faster than #Gemini's, but also less thorough: probably a deliberate choice.

👉 The split between a direct answer and detailed notes is interesting.