News Posts matching #AI


TSMC Unveils Next-Generation HBM4 Base Dies, Built on 12 nm and 5 nm Nodes

At its European Technology Symposium 2024, TSMC announced its readiness to manufacture next-generation HBM4 base dies using both 12 nm and 5 nm nodes. This significant development is expected to substantially improve the performance, power consumption, and logic density of HBM4 memory, catering to the demands of high-performance computing (HPC) and artificial intelligence (AI) applications. The shift from the traditional 1024-bit interface to an ultra-wide 2048-bit interface is a key aspect of the new HBM4 standard. This change will enable the integration of more logic and higher performance while reducing power consumption. TSMC's N12FFC+ and N5 processes will be used to produce these base dies, with the N12FFC+ process offering a cost-effective route to HBM4 performance and the N5 process providing even more logic and lower power consumption at HBM4 speeds.

The company is collaborating with major HBM memory partners, including Micron, Samsung, and SK Hynix, to integrate advanced nodes for HBM4 full-stack integration. TSMC's base die, fabricated using the N12FFC+ process, will be used to install HBM4 memory stacks on a silicon interposer alongside system-on-chips (SoCs). This setup will enable the creation of 12-Hi (48 GB) and 16-Hi (64 GB) stacks with per-stack bandwidth exceeding 2 TB/s. TSMC's collaboration with EDA partners like Cadence, Synopsys, and Ansys ensures the integrity of HBM4 channel signals, thermal accuracy, and electromagnetic interference (EMI) in the new HBM4 base dies. TSMC is also optimizing CoWoS-L and CoWoS-R for HBM4 integration, meaning that massive high-performance chips are already utilizing this technology and getting ready for volume manufacturing.
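As a rough sanity check on those figures, the quoted per-stack bandwidth and the 2048-bit interface together imply a per-pin data rate of roughly 8 Gbps. The short sketch below is illustrative only: it assumes exactly 2 TB/s per stack and a 2048-bit-wide interface, not any published HBM4 pin-speed specification.

```python
# Illustrative back-of-the-envelope check (assumed values, not official HBM4 specs)
stack_bandwidth_bytes = 2e12     # 2 TB/s per stack, as quoted above
interface_width_bits = 2048      # HBM4 interface width

per_pin_gbps = stack_bandwidth_bytes * 8 / interface_width_bits / 1e9
print(f"Implied per-pin data rate: {per_pin_gbps:.2f} Gbps")  # ~7.81 Gbps
```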

ASUS Leaks its own Snapdragon X Elite Notebook

Courtesy of ASUS Vietnam (via @rquandt on X/Twitter), we now have an idea not only of what ASUS' first Qualcomm Snapdragon X Elite notebook will look like, but also of its main specifications. It will share the Vivobook S 15 OLED branding with other notebooks from ASUS, although the leaked model carries the model number S5507QA-MA089WS. At its core is a Qualcomm Snapdragon X Elite X1E-78-100 SoC, which is the base model from Qualcomm. The SoC consists of 12 Oryon cores, of which eight are performance cores and four are energy-efficient cores. A peak multi-threaded clock speed of 3.4 GHz, 42 MB of cache, and a 75 TOPS AI engine round off the SoC specs. The SoC is also home to a Qualcomm Adreno GPU, but so far Qualcomm hasn't released any useful specs about the GPU in the Snapdragon X Elite series of chips.

ASUS has paired the SoC with 32 GB of LPDDR5X memory of an unknown clock speed, although Qualcomm officially supports speeds of up to 8,448 MT/s in a configuration unusual for PC users: eight 16-bit channels, for a bandwidth of up to 135 GB/s. For comparison, Intel's latest Core Ultra processors max out at LPDDR5X at 7,467 MT/s and up to 120 GB/s of memory bandwidth. Other features include a 1 TB PCIe 4.0 NVMe SSD, a glossy 15.6-inch, 2,880 x 1,620, 120 Hz OLED display with 600 nits peak brightness, and a 70 Wh battery. It's unclear what connectivity options will be on offer, but judging by the screenshot below, we can at least expect an HDMI output as well as a pair of USB Type-C ports, a microSD card slot, and a headphone jack. As far as pricing goes, Roland Quandt is suggesting a €1,500 base price on X/Twitter, but we'll have to wait for the official launch to find out what these Arm-based laptops will retail for. ASUS Vietnam has already removed the page from its website.
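The quoted 135 GB/s figure follows directly from that channel configuration. A minimal check, assuming eight 16-bit channels running at 8,448 MT/s:

```python
# Peak LPDDR5X bandwidth from the configuration quoted above
channels = 8             # memory channels
width_bits = 16          # bits per channel
transfer_rate = 8448e6   # transfers per second (8,448 MT/s)

bandwidth_gb_s = channels * width_bits * transfer_rate / 8 / 1e9
print(f"Peak bandwidth: {bandwidth_gb_s:.1f} GB/s")  # ~135.2 GB/s
```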

Phison Announces Pascari Brand of Enterprise SSDs, Debuts X200 Series Across Key Form-factors

Phison is arguably the most popular brand for SSD controllers in the client segment, but it is turning more of its attention to the vast enterprise segment. The company had been making first-party enterprise SSDs under its main marque, but decided that the lineup needed its own brand that enterprise customers could better distinguish from the controller ASIC brand. We hence have Pascari and Imagin. Pascari is an entire product family of fully built enterprise SSDs from Phison. The company's existing first-party drives under the main brand will probably migrate to the Pascari catalog. Imagin, on the other hand, is a design service for large cloud and data-center customers, letting them develop bespoke tiered storage solutions at scale.

The Pascari line of enterprise SSDs is designed completely in-house by Phison, featuring the company's latest controllers, firmware, PCBs, and PMICs, with on-device power-failure protection on select products. The third-party components here are the NAND flash and DRAM chips, both of which have been thoroughly evaluated by Phison for the best performance, endurance, and reliability at its enterprise SSD design facility in Broomfield, Colorado. Phison already has a constellation of industry partners and suppliers, and the company's drives even power space missions; the Pascari brand better differentiates the fully built SSD lineup from the ASIC business. Pascari makes its debut with the X200 series of high-performance SSDs for frequently accessed, high-heat data. The drives leverage Phison's latest PCIe Gen 5 controller technology and highly optimized memory components, and are available in all contemporary server storage form-factors.

Lenovo Announces its New AI PC ThinkPad P14s Gen 5 Mobile Workstation Powered by AMD Ryzen PRO Processors

Today, Lenovo launched the Lenovo ThinkPad P14s Gen 5, designed for professionals who need top-notch performance in a portable 14-inch chassis. Featuring a stunning 16:10 display, this mobile workstation is powered by AMD Ryzen PRO 8040 HS-Series processors. These processors are ultra-advanced and energy-efficient, making them perfect for use in thin and light mobile workstations. The AMD Ryzen PRO HS-Series processors also come with built-in Artificial Intelligence (AI) capabilities, including an integrated Neural Processing Unit (NPU) for optimized performance in AI workflows.

The Lenovo ThinkPad P14s Gen 5 comes with independent software vendor (ISV) certifications and integrated AMD Radeon graphics, making it ideal for running applications like AutoCAD, Revit, and SOLIDWORKS with seamless performance. This mobile workstation is built for mobile power users, offering advanced ThinkShield security features and passing comprehensive MIL-SPEC testing for ultimate durability.

Dell XPS Roadmap Leak Spills Beans on Several Upcoming Intel, AMD, and Qualcomm Processors

A product roadmap leak at leading PC OEM Dell has disclosed the tentative launch dates of several future generations of processors from Intel, AMD, and Qualcomm. The leaked slide details hardware platforms for future revisions of the company's premium XPS notebooks. Given that Dell remains one of the largest PC OEMs, the dates revealed in the slide are highly plausible.

In chronological order, Dell expects Intel's Core Ultra 200V series "Lunar Lake-MX" processor in September 2024, which should mean product unveilings at Computex. It's interesting to note that Intel is only designing "Lunar Lake" for the -MX memory-on-package segment. This chip squares off against Apple's M3, M4, and possibly even the M3 Pro. Intel also has its ambitious "Arrow Lake" architecture planned for the second half of 2024, hence the lack of product overlap—there won't be an "Arrow Lake-MX."

TOP500: Frontier Keeps Top Spot, Aurora Officially Becomes the Second Exascale Machine

The 63rd edition of the TOP500 reveals that Frontier has once again claimed the top spot, despite no longer being the only exascale machine on the list. Additionally, a new system has found its way into the Top 10.

The Frontier system at Oak Ridge National Laboratory in Tennessee, USA remains the most powerful system on the list with an HPL score of 1.206 EFlop/s. The system has a total of 8,699,904 combined CPU and GPU cores, an HPE Cray EX architecture that combines 3rd Gen AMD EPYC CPUs optimized for HPC and AI with AMD Instinct MI250X accelerators, and it relies on Cray's Slingshot 11 network for data transfer. On top of that, this machine has an impressive power efficiency rating of 52.93 GFlops/Watt - putting Frontier at the No. 13 spot on the GREEN500.
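Taken together, the HPL score and the efficiency figure imply Frontier's power draw during the benchmark run. A quick estimate using only the two numbers quoted above:

```python
# Estimate Frontier's HPL power draw from the quoted score and efficiency
hpl_score_gflops = 1.206e9          # 1.206 EFlop/s expressed in GFlop/s
efficiency_gflops_per_watt = 52.93  # GREEN500 efficiency figure

power_mw = hpl_score_gflops / efficiency_gflops_per_watt / 1e6
print(f"Implied power draw: {power_mw:.1f} MW")  # ~22.8 MW
```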

Intel-powered Aurora Supercomputer Ranks Fastest for AI

At ISC High Performance 2024, Intel announced in collaboration with Argonne National Laboratory and Hewlett Packard Enterprise (HPE) that the Aurora supercomputer has broken the exascale barrier at 1.012 exaflops and is the fastest AI system in the world dedicated to AI for open science, achieving 10.6 AI exaflops. Intel will also detail the crucial role of open ecosystems in driving AI-accelerated high-performance computing (HPC). "The Aurora supercomputer surpassing exascale will allow it to pave the road to tomorrow's discoveries. From understanding climate patterns to unraveling the mysteries of the universe, supercomputers serve as a compass guiding us toward solving truly difficult scientific challenges that may improve humanity," said Ogi Brkic, Intel vice president and general manager of Data Center AI Solutions.

Designed as an AI-centric system from its inception, Aurora will allow researchers to harness generative AI models to accelerate scientific discovery. Significant progress has been made in Argonne's early AI-driven research. Success stories include mapping the human brain's 80 billion neurons, high-energy particle physics enhanced by deep learning, and drug design and discovery accelerated by machine learning, among others. The Aurora supercomputer is an expansive system with 166 racks, 10,624 compute blades, 21,248 Intel Xeon CPU Max Series processors, and 63,744 Intel Data Center GPU Max Series units, making it one of the world's largest GPU clusters.

NVIDIA Blackwell Platform Pushes the Boundaries of Scientific Computing

Quantum computing. Drug discovery. Fusion energy. Scientific computing and physics-based simulations are poised to make giant steps across domains that benefit humanity as advances in accelerated computing and AI drive the world's next big breakthroughs. NVIDIA unveiled at GTC in March the NVIDIA Blackwell platform, which promises generative AI on trillion-parameter large language models (LLMs) at up to 25x less cost and energy consumption than the NVIDIA Hopper architecture.

Blackwell has powerful implications for AI workloads, and its technology capabilities can also help deliver discoveries across all types of scientific computing applications, including traditional numerical simulation. By reducing energy costs, accelerated computing and AI drive sustainable computing. Many scientific computing applications already benefit: weather can be simulated at 200x lower cost and with 300x less energy, while digital twin simulations cost 65x less and consume 58x less energy compared with traditional CPU-based systems.

NVIDIA Grace Hopper Ignites New Era of AI Supercomputing

Driving a fundamental shift in the high-performance computing industry toward AI-powered systems, NVIDIA today announced nine new supercomputers worldwide are using NVIDIA Grace Hopper Superchips to speed scientific research and discovery. Combined, the systems deliver 200 exaflops, or 200 quintillion calculations per second, of energy-efficient AI processing power.

New Grace Hopper-based supercomputers coming online include EXA1-HE, in France, from CEA and Eviden; Helios at Academic Computer Centre Cyfronet, in Poland, from Hewlett Packard Enterprise (HPE); Alps at the Swiss National Supercomputing Centre, from HPE; JUPITER at the Jülich Supercomputing Centre, in Germany; DeltaAI at the National Center for Supercomputing Applications at the University of Illinois Urbana-Champaign; and Miyabi at Japan's Joint Center for Advanced High Performance Computing - established between the Center for Computational Sciences at the University of Tsukuba and the Information Technology Center at the University of Tokyo.

Apple Inches Closer to a Deal with OpenAI to Bring ChatGPT Technology to iPhone

To bring cutting-edge artificial intelligence capabilities to its flagship product, Apple is said to be finalizing a deal with OpenAI to integrate the ChatGPT technology into the upcoming iOS 18 for iPhones. According to Bloomberg, multiple sources report that after months of negotiations, the two tech giants are putting the finishing touches on a partnership that would be an important moment for consumer AI. However, OpenAI may not be Apple's only AI ally. The company has also reportedly been in talks with Google over licensing the Gemini chatbot, though no known agreement has been reached yet. The rare team-up between the fiercely competitive firms underscores the intense focus on AI integration across the industry.

Apple's strategic moves are a clear indication that it recognizes the transformative potential of advanced AI capabilities for the iPhone experience. The integration of OpenAI's language model could empower Siri to understand and respond to complex voice queries with deep contextual awareness, which could revolutionize the way Apple's customers interact with their devices and deliver a more intuitive and capable iPhone experience. Potential Gemini integration opens up another realm of possibilities around Google's image and multimodal AI capabilities. Future iPhones may be able to analyze and describe visual scenes, annotate images, generate custom imagery from natural language prompts, and even synthesize audio using AI vocals - all within a conversational interface. As the AI arms race intensifies, Apple wants to position itself at the forefront through these partnerships.

ASRock Announces Intel Arc GPU Version AI QuickSet Software Tool With OpenVINO Support

Leading global motherboard manufacturer ASRock has been rolling out its AI QuickSet software tool for Microsoft Windows 10/11 and Canonical Ubuntu Linux platforms since the end of last year, helping users quickly download, install, and set up many artificial intelligence (AI) applications. The tool can also be accelerated by ASRock's custom AMD graphics cards and has been well received by the market. Today, ASRock continues these efforts and launches a version of the AI QuickSet software tool that supports ASRock's custom Intel Arc A-Series graphics cards, including the Intel Arc A770, A750, A580, A380, and A310 models, allowing users to enjoy generative artificial intelligence (AI) applications at their fingertips!

ASRock AI QuickSet software tool v1.0.3i supports the Microsoft Windows 10/11 64-bit operating system and allows users to easily install Stable Diffusion web UI with OpenVINO without delving into complex configuration settings. This Stable Diffusion web UI image-generation tool, optimized through the Intel OpenVINO toolkit, taps the powerful computing capabilities of ASRock's own Intel Arc A-Series graphics cards (including the Intel Arc A770, A750, A580, A380, and A310 models) to deliver excellent performance. This once again demonstrates ASRock's strong software and hardware R&D capabilities and its attention to users, and makes the AI QuickSet software tool a premium choice for consumers who want to quickly experience generative artificial intelligence (AI) applications.
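For readers curious about what running Stable Diffusion through OpenVINO on an Arc card involves, the sketch below shows the general flow using the openvino and optimum-intel Python packages. It is a minimal illustration rather than ASRock's AI QuickSet itself; the model checkpoint and device string are assumptions chosen for the example.

```python
# Minimal sketch: Stable Diffusion via OpenVINO on an Intel Arc GPU
# (illustrative only, not ASRock's AI QuickSet). Requires: pip install optimum[openvino]
from openvino import Core
from optimum.intel import OVStableDiffusionPipeline

# An Arc A-Series card typically shows up as a "GPU" device in OpenVINO.
print("OpenVINO devices:", Core().available_devices)

# Example checkpoint; AI QuickSet may bundle a different model.
pipe = OVStableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", export=True  # convert the weights to OpenVINO IR
)
pipe.to("GPU")   # target the Arc GPU instead of the CPU
pipe.compile()   # recompile the pipeline for the selected device

image = pipe("a lighthouse at sunset, digital art").images[0]
image.save("result.png")
```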

NVIDIA "Blackwell" Successor Codenamed "Rubin," Coming in Late-2025

NVIDIA has barely started shipping its "Blackwell" line of AI GPUs, and its next-generation architecture is already on the horizon. Codenamed "Rubin," after astronomer Vera Rubin, the new architecture will power NVIDIA's future AI GPUs with generational jumps in performance but, more importantly, a design focus on lowering the power draw. This will become especially important as NVIDIA's current architectures already approach the kilowatt range and cannot scale boundlessly. TF International Securities analyst Ming-Chi Kuo says that NVIDIA's first AI GPU based on "Rubin," the R100 (not to be confused with the ATI GPU of the same name from many moons ago), is expected to enter mass production in Q4 2025, which means it could be unveiled and demonstrated before then, and select customers could get access to the silicon even earlier for evaluation.

The R100, according to Ming-Chi Kuo, is expected to leverage TSMC's 3 nm EUV FinFET process, specifically the TSMC N3 node; in comparison, the new "Blackwell" B100 uses TSMC N4P. This will be a chiplet GPU with a 4x-reticle design, up from Blackwell's 3.3x-reticle design, and it will use TSMC's CoWoS-L packaging, just like the B100. The silicon is expected to be among the first users of HBM4 stacked memory, featuring eight stacks of an as-yet-unknown height. The Grace Rubin GR200 CPU+GPU combo could feature a refreshed "Grace" CPU built on the 3 nm node, likely an optical shrink meant to reduce power. A Q4 2025 mass-production target would mean that customers start receiving the chips by early 2026.

SK hynix Develops Next-Generation Mobile NAND Solution ZUFS 4.0

SK hynix announced today that it has developed the Zoned UFS, or ZUFS 4.0, a mobile NAND solution product for on-device AI applications. SK hynix said that the ZUFS 4.0, optimized for on-device AI from mobile devices such as smartphones, is the industry's best of its kind. The company expects the latest product to help expand its AI memory leadership to the NAND space, extending its success in the high-performance DRAM represented by HBM.

ZUFS is a differentiated technology that classifies data generated by smartphones and stores it in different zones according to its characteristics. Unlike conventional UFS, the latest product groups and stores data with similar purposes and access frequencies in separate zones, boosting the speed of a smartphone's operating system and the management efficiency of the storage device.
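Conceptually, zoned placement means the host tags writes so that data with similar lifetime and update frequency lands in the same zone instead of being interleaved across the device. The toy sketch below illustrates only that idea; the zone names and data categories are invented for the example and are not SK hynix's actual ZUFS 4.0 mapping.

```python
# Toy illustration of zoned data placement (not SK hynix's actual ZUFS 4.0 scheme):
# writes carry a hint so data with similar purpose and update frequency shares a zone.
from collections import defaultdict

# Hypothetical mapping from data category to zone
ZONE_FOR_HINT = {
    "os_files": "zone_0",   # long-lived, rarely rewritten
    "app_cache": "zone_1",  # short-lived, frequently rewritten
    "media": "zone_2",      # written once, read many times
}

zones = defaultdict(list)

def zoned_write(hint: str, payload: bytes) -> str:
    """Append the payload to the zone chosen for its hint."""
    zone = ZONE_FOR_HINT.get(hint, "zone_conventional")  # fallback for untagged data
    zones[zone].append(payload)
    return zone

print(zoned_write("app_cache", b"thumbnail-123"))  # -> zone_1
print(zoned_write("media", b"video-chunk-0"))      # -> zone_2
```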

NVIDIA CEO Jensen Huang to Deliver Keynote Ahead of COMPUTEX 2024

Amid an AI revolution sweeping through trillion-dollar industries worldwide, NVIDIA founder and CEO Jensen Huang will deliver a keynote address ahead of COMPUTEX 2024, in Taipei, outlining what's next for the AI ecosystem. Slated for June 2 at the National Taiwan University Sports Center, the address kicks off before the COMPUTEX trade show scheduled to run from June 3-6 at the Taipei Nangang Exhibition Center. The keynote will be livestreamed at 7 p.m. Taiwan time (4 a.m. PT) on Sunday, June 2, with a replay available at NVIDIA.com.

With over 1,500 exhibitors from 26 countries and an expected crowd of 50,000 attendees, COMPUTEX is one of the world's premier technology events. It has long showcased the vibrant technology ecosystem anchored by Taiwan and has become a launching pad for the cutting-edge systems required to scale AI globally. As a leader in AI, NVIDIA continues to nurture and expand the AI ecosystem. Last year, Huang's keynote and appearances in partner press conferences exemplified NVIDIA's role in helping advance partners across the technology industry.

SpiNNcloud Systems Announces First Commercially Available Neuromorphic Supercomputer

Today, in advance of ISC High Performance 2024, SpiNNcloud Systems announced the commercial availability of its SpiNNaker2 platform, a supercomputer-level hybrid AI high-performance computer system based on principles of the human brain. Pioneered by Steve Furber, designer of the original ARM and SpiNNaker1 architectures, the SpiNNaker2 supercomputing platform uses a large number of low-power processors for efficiently computing AI and other workloads.

The first-generation SpiNNaker1 architecture is currently used by dozens of research groups across 23 countries. Sandia National Laboratories, the Technical University of Munich, and the University of Göttingen are among the first customers placing orders for SpiNNaker2, which was developed around commercialized IP invented in the Human Brain Project, a billion-euro research project funded by the European Union to design intelligent, efficient artificial systems.

Core Configurations of Intel Core Ultra 200 "Arrow Lake-S" Desktop Processors Surface

Intel is giving its next-generation desktop processor lineup the Core Ultra 200 series processor model numbering. We detailed the processor numbering in our older report. The Core Ultra 200 series would be the company's first desktop processors with AI capabilities thanks to an integrated 50 TOPS-class NPU. At the heart of these processors is the "Arrow Lake" microarchitecture. Its development is the reason the company had to refresh "Raptor Lake" to cover its 2023-24 processor lineup. The company's "Meteor Lake" microarchitecture topped off at CPU core counts of 6P+8E, which would have proven to be a generational regression in multithreaded application performance over "Raptor Lake." The new "Arrow Lake-S" desktop processor has a maximum CPU core configuration of 8P+16E, which means consumers can expect at least the same core-counts at given price-points to carry over.

According to a report by Chinese tech publication Benchlife.info, the introduction of "Arrow Lake" would see Intel's desktop processor model numbering align with that of its mobile processor numbering, and incorporate the Core Ultra brand to denote the latest microarchitecture for a given processor generation. Since "Arrow Lake" is a generation ahead of "Meteor Lake," processor models in the series get numbered under Core Ultra 200 series.

Report: 3 Out of 4 Laptop PCs Sold in 2027 will be AI Laptop PCs

Personal computers (PCs) have been used as the major productivity device for several decades. But now we are entering a new era of PCs based on artificial intelligence (AI), thanks to the boom witnessed in generative AI (GenAI). We believe the inventory correction and demand weakness in the global PC market have already normalized, with the impacts from COVID-19 largely being factored in. All this has created a comparatively healthy backdrop for reshaping the PC industry. Counterpoint estimates that almost half a billion AI laptop PCs will be sold during the 2023-2027 period, with AI PCs reviving the replacement demand.

Counterpoint separates GenAI laptop PCs into three categories - AI basic laptop, AI-advanced laptop, and AI-capable laptop - based on their levels of computational performance, corresponding use cases, and the efficiency of that computational performance. We believe AI basic laptops, which are already in the market, can perform basic AI tasks but not full GenAI tasks and, starting this year, will be supplanted by AI-advanced and AI-capable models with enough TOPS (tera operations per second), delivered by an NPU (neural processing unit) or GPU (graphics processing unit), to perform advanced GenAI tasks well.

AMD "Ryzen AI 9 HX170" Surfaces, Suggests New Naming Scheme for Ultraportable Processors

AMD is preparing a new processor naming scheme for its next-generation processors targeting the ultraportable segment, according to a report by Chinese tech publication ITHome, citing sources at ASUS. The new naming scheme purports to make it easier for customers to identify processors with AI capabilities (an integrated NPU) and the processor class (U, H, or HX), followed by a numerical component that tells customers the product grade. This runs contrary to yesterday's report that cited a Lenovo product flyer referencing a "Ryzen 8050 series." It remains to be seen whether the 8050 series is a class of mainstream processors without AI capabilities, which is unlikely, given that Lenovo is using them in its premium ThinkPad T-series.

MINISFORUM Announces AtomMan X7 Ti The World's First Intel Ultra 9 AI Mini PC

MINISFORUM has officially started showing the AtomMan X7 Ti on its website; it is the world's first Intel Core Ultra 9 AI mini PC equipped with a dynamic screen. The X7 Ti pre-sale will begin at 19:00 PST on May 20th in the MINISFORUM official store. AtomMan is MINISFORUM's new high-end brand dedicated to developing cutting-edge, high-performance tech products. Currently, the AtomMan brand consists of two sub-series: the X Series (Exploration/AI) and the G Series (Gaming).

The AtomMan X7 Ti features an Intel Core Ultra 9 processor built on the Intel 4 process technology. It boasts 16 cores and 22 threads, a maximum frequency of 5.1 GHz, 24 MB of L3 cache, and a TDP of 65 W. The integrated Arc graphics come with 8 Xe cores and 8 ray tracing units, and support AV1 encoding and decoding. With 128 execution units, the GPU also supports XeSS upscaling, significantly enhancing 3D rendering, video editing, and live-streaming workflows, and making the system well suited for AAA games.

Razer Introduces Razer Cortex: Add-Ons

Today we are thrilled to announce the latest innovation that promises to transform your gaming experience: Razer Cortex: Add-Ons. Our commitment at Razer has always been to push the boundaries of gaming technology and provide our users with the tools they need to succeed, whether they're casual gamers or competitive athletes. Today, we're taking another giant leap forward in fulfilling that promise, with the introduction of Cortex: Add-Ons - a new feature, crafted with the aim of maximizing your gaming experience.

What is Razer Cortex: Add-Ons?
Cortex: Add-Ons is your one-stop destination for all your gaming plug-in needs. We understand that gaming is not one-size-fits-all, which is why we're bringing you a platform that caters to your individual needs. Whether it's optimizing your performance, sharing your gaming highlights, or enriching your gaming sessions with unique tools, we've got you covered.

Apple Unveils Stunning New iPad Pro With the World's Most Advanced Display, M4 Chip and Apple Pencil Pro

Apple today unveiled the groundbreaking new iPad Pro in a stunningly thin and light design, taking portability and performance to the next level. Available in silver and space black finishes, the new iPad Pro comes in two sizes: an expansive 13-inch model and a super-portable 11-inch model. Both sizes feature the world's most advanced display—a new breakthrough Ultra Retina XDR display with state-of-the-art tandem OLED technology—providing a remarkable visual experience. The new iPad Pro is made possible with the new M4 chip, the next generation of Apple silicon, which delivers a huge leap in performance and capabilities. M4 features an entirely new display engine to enable the precision, color, and brightness of the Ultra Retina XDR display. With a new CPU, a next-generation GPU that builds upon the GPU architecture debuted on M3, and the most powerful Neural Engine yet, the new iPad Pro is an outrageously powerful device for artificial intelligence. The versatility and advanced capabilities of iPad Pro are also enhanced with all-new accessories. Apple Pencil Pro brings powerful new interactions that take the pencil experience even further, and a new thinner, lighter Magic Keyboard is packed with incredible features. The new iPad Pro, Apple Pencil Pro, and Magic Keyboard are available to order starting today, with availability in stores beginning Wednesday, May 15.

"iPad Pro empowers a broad set of pros and is perfect for anyone who wants the ultimate iPad experience—with its combination of the world's best displays, extraordinary performance of our latest M-series chips, and advanced accessories—all in a portable design. Today, we're taking it even further with the new, stunningly thin and light iPad Pro, our biggest update ever to iPad Pro," said John Ternus, Apple's senior vice president of Hardware Engineering. "With the breakthrough Ultra Retina XDR display, the next-level performance of M4, incredible AI capabilities, and support for the all-new Apple Pencil Pro and Magic Keyboard, there's no device like the new iPad Pro."

Apple Introduces the M4 Chip

Apple today announced M4, the latest chip delivering phenomenal performance to the all-new iPad Pro. Built using second-generation 3-nanometer technology, M4 is a system on a chip (SoC) that advances the industry-leading power efficiency of Apple silicon and enables the incredibly thin design of iPad Pro. It also features an entirely new display engine to drive the stunning precision, color, and brightness of the breakthrough Ultra Retina XDR display on iPad Pro. A new CPU has up to 10 cores, while the new 10-core GPU builds on the next-generation GPU architecture introduced in M3, and brings Dynamic Caching, hardware-accelerated ray tracing, and hardware-accelerated mesh shading to iPad for the first time. M4 has Apple's fastest Neural Engine ever, capable of up to 38 trillion operations per second, which is faster than the neural processing unit of any AI PC today. Combined with faster memory bandwidth, along with next-generation machine learning (ML) accelerators in the CPU, and a high-performance GPU, M4 makes the new iPad Pro an outrageously powerful device for artificial intelligence.

"The new iPad Pro with M4 is a great example of how building best-in-class custom silicon enables breakthrough products," said Johny Srouji, Apple's senior vice president of Hardware Technologies. "The power-efficient performance of M4, along with its new display engine, makes the thin design and game-changing display of iPad Pro possible, while fundamental improvements to the CPU, GPU, Neural Engine, and memory system make M4 extremely well suited for the latest applications leveraging AI. Altogether, this new chip makes iPad Pro the most powerful device of its kind."

Apple Reportedly Developing Custom Data Center Processors with Focus on AI Inference

Apple is reportedly working on creating in-house chips designed explicitly for its data centers. The news comes from a recent report by the Wall Street Journal, which highlights the company's efforts to enhance its data processing capabilities and reduce dependency on third parties for infrastructure. Under an internal project called Apple Chips in Data Center (ACDC), which started in 2018, Apple set out to design data center processors to handle its massive user base and expand the company's service offerings. Recent advances in AI mean that Apple will probably serve large language models (LLMs) from its own data centers, and the chip will most likely focus on inference of AI models rather than training.

The AI chips are expected to play a crucial role in improving the efficiency and speed of Apple's data centers, which handle vast amounts of data generated by the company's various services and products. By developing these custom chips, Apple aims to optimize its data processing and storage capabilities, ultimately leading to better user experiences across its ecosystem. The move to develop AI-focused chips for data centers is seen as a strategic step in Apple's efforts to stay ahead in the competitive tech landscape. Almost all major tech companies, often grouped under the "Magnificent Seven" label, already have products that use AI in silicon and in software, and Apple is the one that has seemingly lacked them. Now, the company is integrating AI across the entire vertical, from the upcoming iPhone integration to M4 chips for Mac devices and ACDC chips for data centers.

Microsoft Prepares MAI-1 In-House AI Model with 500B Parameters

According to The Information, Microsoft is developing a new AI model, internally named MAI-1, designed to compete with the leading models from Google, Anthropic, and OpenAI. This significant step forward in the tech giant's AI capabilities is boosted by Mustafa Suleyman, the former Google AI leader who previously served as CEO of Inflection AI before Microsoft acquired the majority of its staff and intellectual property for $650 million in March. MAI-1 is a custom Microsoft creation that utilizes training data and technology from Inflection but is not a transferred model. It is also distinct from Inflection's previously released Pi models, as confirmed by two Microsoft insiders familiar with the project. With approximately 500 billion parameters, MAI-1 will be significantly larger than its predecessors, surpassing the capabilities of Microsoft's smaller, open-source models.

For comparison, OpenAI's GPT-4 reportedly has about 1.8 trillion parameters in a sparse Mixture-of-Experts design, while open-source models from Meta and Mistral feature dense designs of around 70 billion parameters. Microsoft's investment in MAI-1 highlights its commitment to staying competitive in the rapidly evolving AI landscape. The development of this large-scale model represents a significant step forward for the tech giant as it seeks to challenge industry leaders in the field. The increased computing power, training data, and financial resources required for MAI-1 demonstrate Microsoft's dedication to pushing the boundaries of AI capabilities and its intention to compete on its own. With the involvement of Mustafa Suleyman, a renowned expert in AI, the company is well positioned to make significant strides in this field.
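To put the parameter counts in perspective, a quick back-of-the-envelope estimate of the memory needed just to hold the weights is shown below. The 16-bit precision is an assumption chosen for illustration, not a disclosed detail of MAI-1.

```python
# Rough weight-memory estimate for the model sizes mentioned above,
# assuming 2 bytes per parameter (FP16/BF16); actual deployments may differ.
def weight_memory_gb(params: float, bytes_per_param: int = 2) -> float:
    return params * bytes_per_param / 1e9

for name, params in [("MAI-1 (reported)", 500e9),
                     ("70B-class dense model", 70e9)]:
    print(f"{name}: ~{weight_memory_gb(params):,.0f} GB of weights")
# MAI-1 (reported): ~1,000 GB of weights
# 70B-class dense model: ~140 GB of weights
```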

NVIDIA Advertises "Premium AI PC" Mocking the Compute Capability of Regular AI PCs

According to a report from BenchLife, NVIDIA has started a marketing campaign for the "Premium AI PC," squarely aimed at the industry's latest trend, pushed by Intel, AMD, and Qualcomm, of "AI PC" systems that feature a dedicated NPU for processing smaller models locally. NVIDIA's approach comes from a different point of view: every PC with an RTX GPU is a "Premium AI PC," which holds a lot of truth. Generally, GPUs (regardless of the manufacturer) hold more computing potential than the CPU and NPU combined. With NVIDIA's push to include Tensor cores in its GPUs, the company is preparing for next-generation software from vendors and OS providers that will harness these powerful pieces of silicon and embed more AI functionality in the PC.

At the Computex event in Taiwan, there should be more details about Premium AI PCs and regular AI PCs. In its marketing materials, NVIDIA compares AI PCs to its Premium AI PCs, which have enhanced capabilities across various applications like image and video editing, upscaling, productivity, gaming, and developer workloads. Another relevant selling point is the user base for these Premium AI PCs, which NVIDIA touts at 100 million users. Those PCs support over 500 AI applications out of the box, highlighting the importance of proper software support. NVIDIA's systems are usually more powerful, with GeForce RTX GPUs delivering anywhere from 100 to 1,300+ TOPS, compared to the roughly 40 TOPS of regular AI PCs. How other AI PC makers plan to fight in the AI PC era remains to be seen, but there is a high chance that this will be the spotlight of the upcoming Computex show.