Machine learning and artificial intelligence are quickly reshaping the corporate IT backbone, and the companies that master them stand to lead the industry.

In September of this year, giants such as Microsoft, Google, Facebook, and IBM announced the establishment of an AI cooperation organization. Now they are investing heavily in ML/AI hardware design to accelerate next-generation applications. Here is how each is stacking up:

Intel What it's doing: The world's best-known chipmaker recently introduced a new processor family aimed specifically at ML applications: Knights Mill. It has also outlined plans to pair its CPUs with reprogrammable FPGAs, a powerful technology Intel now owns outright following its acquisition of Altera.

Why do this: As the PC market continues to melt away like Arctic glaciers, Intel has been looking for ways to make up the difference. Server products alone will not close the gap, so Intel is broadening its lineup of main processors and coprocessors to accelerate ML workloads.

Intel is unlikely, however, to offer its own GPU for ML work. Intel's GPU efforts have never reached the level of other processor manufacturers', and it maintains that its CPU-specific improvements can outperform GPUs. After all, Intel wants an environment where its CPUs alone, not mixed with another company's GPUs, power the future.
Microsoft What it is doing: Having already outfitted its Azure cloud with custom FPGAs to add machine-learning acceleration to its clusters, Microsoft is now talking about letting customers program those devices directly, enabling more powerful machine-learning tools in the cloud.

Why do this: Microsoft already provides ML/AI tools both inside and outside Azure, but it is now weighing a new way to offer machine-learning hardware to cloud customers. The hard part is that FPGAs are complex to program and are not as well understood for ML as GPUs are.
Google What it is doing: Google has invested deeply in machine-learning software such as TensorFlow, and now offers a hardware complement, the Tensor Processing Unit (TPU), to speed up specific machine-learning functions.

Why do this: Like Microsoft, Google wants its cloud to be the premier destination for ML applications. Google has made it clear that it wants to stand out by being easier to use, so it is unlikely to offer low-level access to its ML hardware. For those who want direct access to machine-learning hardware in a familiar environment, there are always Google Cloud's new GPU instances. Most likely, the two hardware offerings will work together.
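The "specific machine learning functions" these accelerators speed up are, at their core, dense matrix multiplications, the operation that dominates neural-network inference and training. The following sketch, in plain Python for clarity (real workloads use heavily optimized kernels on GPU, TPU, or FPGA), shows the kind of computation being offloaded; the matrix sizes and values here are purely illustrative.

```python
# Illustrative only: the dense matrix multiply at the heart of neural-network
# workloads -- the operation accelerators such as the TPU are built to speed up.

def matmul(a, b):
    """Multiply matrix a (m x k) by matrix b (k x n), both as lists of rows."""
    k = len(b)       # inner dimension shared by a's columns and b's rows
    n = len(b[0])    # number of output columns
    return [[sum(row[i] * b[i][j] for i in range(k)) for j in range(n)]
            for row in a]

# A tiny "layer": a 1x2 input vector times a 2x2 weight matrix.
inputs = [[1.0, 2.0]]
weights = [[0.5, -1.0],
           [0.25, 0.75]]
print(matmul(inputs, weights))  # [[1.0, 0.5]]
```

A production model runs millions of these multiply-accumulate operations per inference, which is why dedicated silicon pays off.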
What IBM is doing: IBM's new machine-learning toolset, PowerAI, runs on a combination of IBM Power processors and Nvidia GPUs, linked by new dedicated hardware designed to couple CPU and GPU as closely as possible.

Why do this: IBM already has a well-known ML/AI offering in Watson, but Watson was conceived and sold chiefly as a black-box service. PowerAI is a hardware suite, not a single processor or GPU, aimed at high-end customers who want full control over its capabilities and how they use them. It fits IBM's plans for the Power processor family, which is built around the big-data and cloud applications where machine-learning workloads run.
