The principles of AI scaling were first laid out by OpenAI researchers in 2020. Their work showed that increasing the number of model parameters, the volume of training data, and the available compute improves performance in a predictable way, following empirical power laws. These scaling laws drove the rapid progress in language models from GPT-2 to GPT-4. In 2025, the same principles are expected to extend to new domains: startups such as EvolutionaryScale are applying them to large-scale biological models, while Physical Intelligence is doing so in robotics, and experts anticipate similar gains in other fields.
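As a rough illustration of what these scaling laws say, the 2020 work modeled loss as a power law in model size, dataset size, and compute. The sketch below shows only the general form; the constants and exponents are stand-ins for empirically fitted values, not figures taken from this article.

```latex
% Illustrative form of neural scaling laws (general shape only).
% N = parameters, D = training tokens, C = compute.
% N_c, D_c, C_c and the exponents \alpha_N, \alpha_D, \alpha_C are
% placeholders for empirically fitted constants, not published values.
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad
L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}, \qquad
L(C) \approx \left(\frac{C_c}{C}\right)^{\alpha_C}
```

The practical reading is that, under these fits, loss falls smoothly and predictably as any one of the three resources is scaled up, which is why labs could plan ever-larger training runs with some confidence about the returns.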