Hardware collector YJFY has posted images of what they claim is Intel's 2nd Generation Larrabee graphics card, a product that never came to market but did exist as hardware evaluation samples. Intel's Larrabee 2 graphics board was meant to be based on the chip eventually known as Knights Corner, and this is the first time (supposed) images of this device have been published.
The alleged 2nd Generation Larrabee graphics card carries a processor that looks exactly like Intel's Knights Corner, which was demonstrated by an Intel exec at the SC11 conference in November 2011. The processor is an engineering sample produced in late 2011 with the QBAY stepping. It allegedly features 60 cores and operates at 1.00 GHz, which matches the specifications of Intel's KNC. As with production Xeon Phi 'Knights Corner' products, the processor is paired with GDDR5 memory, in this case 4GB of it.
The board is clearly a very early evaluation sample, with diagnostic LEDs, multiple connectors for probes, and various jumpers. It also features a DVI connector, typically used for video output. Keeping in mind that Tom Forsyth, a developer on the Larrabee project at Intel, once said that the company's Knights Corner silicon still featured GPU parts like graphics outputs and texture samplers, we're not surprised to see a DVI connector on a KNC-based board.
While we cannot be sure that the card in the picture is indeed Larrabee 2 based on the Knights Corner silicon, there’s a lot of direct and indirect evidence that we’re dealing with the 2nd Generation Larrabee.
“Remember — KNC is literally the same chip as LRB2. It has texture samplers and a video out port sitting on the die,” Forsyth said. “They don’t test them or turn them on or expose them to software, but they are still there – it is still a graphics-capable part.”
Intel's codenamed Larrabee product was meant to be both a client PC-oriented graphics processor and a high-performance computing co-processor, built from simple 4-way Hyper-Threaded x86 cores with 512-bit vector units (the LRBni instruction set, a forerunner of AVX-512) that were meant to deliver flexible programmability and competitive performance. After Intel determined that Larrabee did not live up to expectations in graphics workloads (as it was still largely a CPU with graphics capabilities), it refocused the project entirely on HPC workloads, which is how its Xeon Phi was born.
“[In 2005, when Larrabee was conceived, Intel] needed something that was CPU-like to program, but GPU-like in number crunching power,” said Forsyth. “[…] The design of Larrabee was of a CPU with a very wide SIMD unit, designed above all to be a real grown-up CPU — coherent caches, well-ordered memory rules, good memory protection, true multitasking, real threads, runs Linux/FreeBSD, etc.”
But Intel's Xeon Phi, its MIC (Many Integrated Core) architecture, and other massively parallel CPU architectures (Sony's Cell, Sun's Niagara) ultimately failed to offer competitive performance against Nvidia's compute GPUs, which is why Intel eventually decided to re-enter the discrete graphics business with its Arc GPUs and introduce its own Ponte Vecchio compute GPUs.