OpenAI and GPU Shortage
OpenAI has postponed the launch of GPT-4.5 due to GPU shortages. The model will be expensive and initially available only to Pro subscribers. Altman promised more GPUs soon.

Sam Altman, CEO of OpenAI, announced that the company has been forced to delay the launch of its new model, GPT-4.5, due to a shortage of GPUs. In a post on X, Altman described GPT-4.5 as "giant" and "expensive," emphasizing that "tens of thousands" more GPUs will be needed before additional ChatGPT users can access the model. GPT-4.5 will initially be available to ChatGPT Pro subscribers starting Thursday, with ChatGPT Plus customers to follow the week after.
The high cost of GPT-4.5 is attributed to its considerable size. OpenAI will charge $75 per million tokens fed into the model (roughly 750,000 words) and $150 per million tokens generated by it. That is 30 times the input cost and 15 times the output cost of the widely used GPT-4o model. Some industry experts consider this pricing excessive and have expressed concern about the future of such large models.
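To put those rates in concrete terms, the short Python sketch below estimates the cost of a single API call at the list prices quoted above; the token counts in the usage example are hypothetical.

```python
# Rough per-request cost at the GPT-4.5 list prices quoted in the article.
# The token counts in the example call are hypothetical.

INPUT_PRICE_PER_M = 75.0    # USD per 1M input (prompt) tokens
OUTPUT_PRICE_PER_M = 150.0  # USD per 1M output (generated) tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one API call at the quoted GPT-4.5 rates."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: a 2,000-token prompt that produces a 500-token reply
print(f"${request_cost(2_000, 500):.3f}")  # -> $0.225
```

Even a modest prompt-and-reply pair costs on the order of tens of cents, which is why the pricing has drawn attention.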
Altman explained that the company's rapid growth led to the GPU shortage. He said tens of thousands of GPUs will be added next week, after which the model will be released to the Plus tier. While the situation is not ideal, Altman acknowledged that growth surges of this kind are hard to predict. He also noted that the lack of computing capacity is delaying the company's product launches. To address these constraints in the coming years, OpenAI plans to develop its own AI chips and build a massive network of data centers.