Tech News Today – Reported 3/12/2025

Let’s explore what’s being reported in the world of computing today, 3/12/2025.

  • South Korea’s recently announced plan to secure 10,000 high-performance GPUs for its national AI computing center is a significant move in the global AI race. As for the specific chips, several leading AI chip producers could be potential suppliers.
    • NVIDIA: The leading player in the AI chip market. Their chips, such as the H200, B300, and GB300, are designed for AI training and inference.
    • AMD: AMD is another major player, with their MI300 and MI350 series chips competing with NVIDIA’s offerings. They’re also working with machine learning companies to optimize their hardware.
    • Intel: Intel’s Gaudi 3 chip is its latest AI accelerator. Although Intel is a leading CPU manufacturer, its AI chip sales guidance trails AMD’s.
Other notable AI chip producers include Google, Amazon, and Huawei, which design their own chips for specific applications.

  • Indonesia’s Indosat Ooredoo Hutchison is considering selling a stake of up to 75% in its fiber business, valued at around $1 billion. This development reflects growing interest in digital infrastructure across Asia, driven by increasing demand for artificial intelligence and cloud computing-based services.

  • On the innovation front, researchers are working on creating more efficient AI models. For instance, Chinese startup DeepSeek is developing AI models that optimize computational efficiency rather than raw processing power, potentially closing the gap between Chinese-made AI processors and more powerful U.S. counterparts.
Efficient AI Models
    1. DeepSeek: Developing AI models that optimize computational efficiency.
    2. EfficientNet: A family of models that use depthwise separable convolution and compound scaling for state-of-the-art performance with lower computational cost.
    3. MobileNet: Designed for mobile and embedded vision applications, focusing on efficient computation and memory usage.
    4. ShuffleNet: Uses a novel channel shuffle operation to reduce computation and memory usage while maintaining accuracy.
    5. SqueezeNet: Employs a fire module to reduce the number of parameters and computation required.
    6. ResNet: Uses residual connections to ease training and improve accuracy while reducing computational complexity.
    7. DenseNet: Utilizes dense connections to reduce the number of parameters and improve accuracy.
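To make the efficiency gains above concrete, here is a rough sketch (in plain Python, with illustrative layer sizes chosen by us) of why the depthwise separable convolutions used by MobileNet and EfficientNet are so much cheaper than standard convolutions: a standard k×k convolution mixes all input channels for every output channel, while the separable version splits that into a per-channel spatial filter followed by a 1×1 pointwise mix.

```python
def conv_params(k, c_in, c_out):
    # Standard convolution: each of the c_out filters spans
    # all c_in input channels over a k x k window.
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # Depthwise: one k x k filter per input channel (k*k*c_in),
    # then a 1x1 pointwise convolution to mix channels (c_in*c_out).
    return k * k * c_in + c_in * c_out

standard = conv_params(3, 256, 256)
separable = depthwise_separable_params(3, 256, 256)
print(f"standard:  {standard:,}")    # 589,824 parameters
print(f"separable: {separable:,}")   # 67,840 parameters
print(f"reduction: {standard / separable:.1f}x")
```

For a 3×3 layer with 256 input and output channels, the separable form needs roughly 8.7× fewer parameters (and proportionally fewer multiply-adds), which is the core of these architectures’ efficiency.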
Startups and Research Initiatives
    1. DeepMind’s Efficient Inference: Research focused on developing efficient inference methods for AI models.
    2. Google’s TensorFlow Lite: An open-source framework for efficient AI inference on mobile and embedded devices.
    3. Horizon Robotics: Develops efficient AI chips and models for edge AI applications.
    4. Hailo: Creates efficient AI chips and models for edge AI applications.
    5. Graphcore: Develops Intelligence Processing Units (IPUs) for efficient AI computation.
    6. Cerebras: Designs wafer-scale chips for efficient AI computation.
Innovative Approaches
    1. “Sparse Networks” by Subutai Corporation: Develops sparse network technology to reduce computational complexity by up to 90%.
    2. “Pruning” by Stanford University: Researchers have developed algorithms to prune neural networks, reducing computational complexity while maintaining accuracy.
    3. “Knowledge Distillation” by Google: A technique to transfer knowledge from large models to smaller ones, enabling efficient deployment on edge devices.
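As a minimal sketch of the pruning idea above (one-shot magnitude pruning, not any group’s specific algorithm), the following NumPy snippet zeroes out the smallest-magnitude weights of a layer until a target fraction of entries are zero; the weight matrix and sparsity level are illustrative assumptions:

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude entries so that roughly
    `sparsity` fraction of the weights become zero."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)
w_pruned = magnitude_prune(w, sparsity=0.9)
print(f"zeros: {np.mean(w_pruned == 0):.2%}")  # ~90% of weights pruned
```

In practice, pruning is usually followed by fine-tuning to recover accuracy, and the resulting sparse matrices only pay off at inference time when the hardware or runtime can exploit sparsity.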

  • Microsoft-backed OpenAI is pushing ahead with its plan to reduce reliance on Nvidia by developing its first generation of in-house AI silicon. To achieve this, OpenAI is collaborating with Broadcom and TSMC to design its first artificial intelligence chip. This move is part of OpenAI’s strategy to reduce dependence on any single supplier and diversify its chip sources, as it is also adding AMD chips alongside Nvidia chips to meet its growing demand.

It’s worth noting that Microsoft itself has been developing custom chips, including the Azure Maia AI Accelerator and the Azure Cobalt CPU, which are designed to optimize AI workloads and cloud computing.

These custom chips will start rolling out to Microsoft’s data centers, initially powering services like Microsoft Copilot and Azure OpenAI Service. By developing its own silicon, OpenAI aims to improve performance, reduce costs, and increase efficiency in its AI operations.

  • There has also been significant progress in quantum computing. Microsoft has unveiled Majorana 1, the world’s first quantum processor powered by topological qubits, which could revolutionize the field. This breakthrough relies on a new state of matter called topological superconductivity, allowing for more efficient and stable quantum computation.

  • In other advancements, scientists have developed a biorobotic arm that can mirror human tremors, which could help individuals with Parkinson’s disease. Researchers have also enabled a paralyzed man to control a robotic arm with his brain signals, allowing him to grasp and move objects.

Additionally, there have been breakthroughs in magnetic field sensing electronic textiles, which could transform the use of clothing. A new way to measure high-speed fluctuations in magnetic materials has also been discovered, which could advance technologies like magnetic resonance imaging (MRI).
