Our Maia 200 inference chip, announced today, is the most performant first-party silicon of any hyperscaler: it delivers 3x the FP4 performance of Amazon's Trainium v3 and FP8 performance above Google's TPUv7.