Late-Night Deep Talks for the Hardest Days (Deep "Liquid" Topics)


Podcast: Is Liquid Cooling Inevitable?

April 11, 2022, by Max Smolaks

Translator's Note

The industry is moving toward higher density and greater sustainability. There are several liquid cooling approaches suited to different scenarios, and liquid cooling's penetration into IT equipment cooling is increasing year by year.

With upcoming mainstream server CPUs expected to consume around 400W of power, GPUs already consuming more than that (Nvidia's H100 requires ~700W per board), and with terabytes of memory per server, the cutting-edge IT workloads of the future will be almost impossible to cool with air alone.

Liquid cooling to the rescue

In this podcast, we discuss thermal design points of modern processors, the cooling requirements of edge computing, and the impact of new workloads on power consumption.

We also look at some of the barriers facing the wider deployment of liquid cooling, and the enduring importance of server fans.

“This is going to take some time,” Mattur told Data Center Knowledge. “This is not an overnight, forklift change to the entire data center.”

Max Smolaks: Hello and welcome to Uptime with Data Center Knowledge, the podcast that brings you the news and views from the global data center industry. I'm Max Smolaks, senior editor at Data Center Knowledge, and in this episode we will discuss liquid cooling in the data center, where it is today, and where it is going. To look at the subject in detail, I'm pleased to welcome Dattatri Mattur, director of hardware engineering at Cisco. Hello, Dattatri, and welcome to the show.

Dattatri Mattur: Thank you, Max, for hosting this event. And I'm excited to talk about what we are doing for liquid cooling at Cisco.

MS: Absolutely. And first, can you tell us why you're investigating liquid cooling? After all, Cisco does not make cooling equipment, at least not to my knowledge. So why are you interested?

DM: Absolutely, I can tell you why we are interested. I know many people mistake Cisco for Sysco, the produce company. Similarly, people are wondering why Cisco is getting into liquid cooling. What has been happening over the last few years, maybe the last five, is that the power consumed by the various components in the system has gone up significantly.

Just as an example, the M3 generation of servers that we had in the market had a CPU TDP of something like 140 watts. In the next generation, the M7, which we will be shipping later this year, we'll have CPUs consuming north of 350 watts. So it's almost 3x. And within a few years, it will exceed 400 watts. So it is going to become impossible to cool these CPUs with air cooling alone; we need to find a different mechanism, and hence liquid cooling is getting more traction.

MS: And engineers have been trying to introduce liquid cooling into the traditional data center for at least a decade, probably longer if we're talking about HPC systems. So how has the conversation around the subject changed? You know, what are the drivers? If we want to go back to basics: why are chips becoming so powerful?

DM: There are many drivers, and I'll classify them into three key aspects. One is component power, and I'll get into some detail on that. The second is sustainability and the regulatory requirements that are kicking in; you have been hearing about net zero by 2040 or 2045 from various countries. And the third one that is going to drive liquid cooling is edge growth, as people start deploying compute and network infrastructure at the edge. Those are very small, enclosed spaces that they should be able to cool, and that's one of the reasons.

Now going back to component power. CPU TDPs have gone up significantly, as I stated in the previous two answers; you're going from 100 watts to 400 watts by 2025. Memory power is another thing that is exploding. At Cisco, 64GB and 128GB are already our highest-selling memory modules, and a typical server now ships with an average of 800GB to a terabyte of memory. So memory power is going up. And the third thing driving [server power consumption] is the GPU, with the emergence of AI and ML, which drives the requirement for GPUs that are in the vicinity of 300 to 400 watts.

Of course, on sustainability, countries are mandating new power usage effectiveness targets, saying that a data center cannot exceed 1.3, and in some countries they are getting even more aggressive, at 1.2 or 1.25. How to meet these requirements and become net-zero compliant is one of the key drivers making the move to liquid cooling mandatory as we get toward the later part of this decade.

MS: Okay, thank you. So, we talked a little bit about the drivers of why the customers are interested, but what happens when you introduce these systems into the data center? What are some benefits, maybe some drawbacks of liquid cooling systems in terms of data center operation? Because, to implement liquid cooling, you essentially need to re-architect your data hall, you need to make considerable adjustments. So, why are people doing that?

DM: Of course, we talked about the drivers, why people are looking at adopting liquid cooling, but let me tell you the disadvantages. Other than the hyperscalers, a lot of enterprise and commercial users have data centers that were built out over the last 10-15 years. They cannot simply go ahead and modify them to adapt to new liquid cooling standards or methodologies. That is the biggest Achilles' heel of our problem, I would say. Having said that, they also see the necessity of adapting, both from a sustainability standpoint and from a performance and TCO standpoint. Workload requirements and the various aspects of scale-out technology are compelling them to adopt these newer GPUs, newer CPUs, and more memory, and they are left with no choice.

So this is where they are coming in and asking vendors like us: how can you help us evolve into liquid cooling? Because this is going to take some time; this is not an overnight forklift change for the entire data center, it will progress over the years. In most of the existing data centers, they will retrofit. In some greenfield deployments, they might start from scratch, where they have all the flexibility to redo the data center. But our focus is: how do we enable our existing customers to evolve and adapt to liquid cooling?

MS: Yeah, and the data center used to be sort of the domain of the electrical engineer, but soon it's going to become the domain of the plumbing engineer, with the pipes and the gaskets and all of that. Do you think this technology has suffered from its heritage, given that it originated with PC enthusiasts and gaming hardware? And I think right now is a good time to ask: is your PC water-cooled?

DM: Yes. My son is into gaming, and I bought him a gaming desktop a few years back. Now he has outgrown it, and this summer I'm promising to get him a new one, which is going to be water-cooled; we are already looking at what the new options are and what he should be adopting. So you're absolutely right.

Having said that, water cooling comes with its own headaches. None of our customers want to hear about water cooling and leaks in the racks, where one leak can bring down an entire rack or several racks. That is the biggest drawback of this technology and the reason people are very hesitant to adopt it. Having said that, like any other technology it has evolved, and we can discuss more about the different types of cooling, how we are addressing some of these challenges associated with leaks and whatnot, and what we are doing to make it very reliable.

MS: Absolutely. It's mission-critical infrastructure, and downtime is pretty much the worst thing that can happen, so anything that causes downtime is the enemy. You've mentioned this, but there are several approaches to liquid cooling: there is immersion cooling, where it's just large vats of dielectric fluid, and direct-to-chip cooling, which involves a lot more pipes. You've looked into this in detail, so among all of these variants, which one do you think has the most legs in the data center in the near term?

DM: Sure, but before I provide Cisco's view or my personal view, I want to walk you through, at a high level, the different technologies that are getting traction. If you look at the liquid cooling technology being adopted for the data center, both in compute and in some of the high-end networking gear, we can classify it into two major types. One is immersion-based cooling, the other is cold plate-based cooling. Immersion-based cooling, again, you can sub-classify into two or three types: single-phase immersion or two-phase immersion, each with its own advantages, disadvantages, efficiency, and what is called global warming potential: how [environmentally] friendly it is. Similarly, with cold plate cooling we have both single-phase and two-phase. Both have advantages and disadvantages.

The advantage of immersion is that it gives you flexibility, because everything is [built from the] ground up, including your data center design and the way the hardware is designed; you're not adapting to something legacy and trying to make it work. As a result, you get a better PUE factor, power usage effectiveness. Whereas cold plate, the way it has been deployed, is more of an evolutionary technology to retrofit into the existing data center. So we have heard several customers asking for immersion, and we have done some deployments, but not a lot. That's not exactly in compute but in the networking space; it has been tried out at Cisco.

But as for what we are going after right now: we know immersion eventually gets there, but for us to evolve into liquid cooling, we are focusing on cold plate liquid cooling. The reason is very simple: our customer base wants to adapt this to the existing data center infrastructure; they're not going to change everything overnight.

The way we are working on this is that even within cold plate cooling, we have classified it into closed loop cooling and open loop cooling. At a high level, what is closed loop cooling? It means you have an existing rack, and within that 1U of your server the entire liquid cooling system is built in and sealed. Even with leaks or anything, it should be self-contained to that particular rack; it should not spill over to other racks. That way, it's like deploying any other rack server they have today, or a blade server. So they're not changing the foundational infrastructure; they're retaining their assets.

That has its own drawbacks. As you can see, we have to work within the current envelope of physics in terms of real estate: what we can do with respect to the liquid, and how we build the radiator, the condenser, and the pumping system, I would say. It all has to be miniaturized, and it all has to be field-replaceable. That's one technology we are going after, so that it gives them a path to move forward.

The second technology we are going after is called open loop cooling, where instead of building this into the same 1U server, for your 42U rack we provide a unit called a Cooling Distribution Unit, or CDU, that is part of the entire rack; you put that in, and the rack is also eventually managed by Cisco through your management system. From there you have the fittings, the hot pipe and cold pipe, which are plug-and-play kinds of fittings; you connect them to your rack servers so that you have cold liquid going in and hot liquid coming out of your server, getting recirculated, and going back into the server to cool it again. These are the two technologies we are going after right now.

MS: And again, we mentioned that these systems really grew out of gaming, and that doesn't exactly inspire enterprise levels of confidence. Do you think the situation has changed, that people understand the systems better? Data center operators, data center users, insurers, because somebody needs to insure that facility? Do you think that at all levels of the stack we now understand this a little bit better and trust it a little bit more, and that this is why, slowly but surely, this technology is finally getting embraced after ten years of, you know...

DM: You're absolutely right. This is now being deployed. I have some data here; let me see if I can quickly provide it. There is market research data from BIS based on both Intel's and AMD's deployments. Currently about $1.43 billion of liquid cooling technology is already deployed in the data center, and they are expecting a CAGR of about 25 to 30% between 2021 and 2026. Why is this happening? One, the performance requirements: the workload requirements are driving higher core counts, higher memory densities, and GPUs, and for some of these workloads there is no other easy solution; you have to adopt liquid cooling. As a result, as I stated previously, several technologies are being brought out. In particular, the reliability and availability of the solutions have improved significantly. If you look at the new [cooling equipment] coming out to help address leaks: even if it leaks, it should not damage, or it will not damage, the other servers or other infrastructure in the rack, because it will evaporate. That's the kind of technology they are looking at. So as a result there is no collateral damage, and that's one thing being ensured.

The second thing is how the piping, the condenser, the radiators, and the pumps are being designed. There are parallel mechanisms built in: even if one fails, the other takes over, and even if both of them fail, some cooling kicks in, so your system is not dead.

The third point I would like to make is that when we say liquid cooling in this evolution, whether closed loop or open loop with cold plates, we are not making it 100% liquid cooling. It is a hybrid technology where you still have the fans spinning in conjunction with the liquid cooling. It's more of an assist, I would say: the fan is the assist, and liquid cooling takes over the majority. But as you can imagine, if one or the other fails, there is some kind of mechanism put in so that you can limp along and provide a service window for your service provider or data center operator to go address it. Some of these things give data center operators better confidence to start adopting these technologies.

MS: That sounds very positive. And yes, you're absolutely right; it feels like there's more recognition for this tech. Another aspect that is obviously very influential right now is the move toward more sustainability in the industry, right? You've mentioned some of these targets, and they are very ambitious. Some of them are 2030 targets, not 2040 targets, and that is obviously a very quick timeline. So my last question is: do you think this drive toward more sustainable practices will benefit liquid cooling adoption, that people who have perhaps been putting it off for other reasons are going to ask: is it going to help make us more sustainable? Is it going to cut down our electricity bills? Is it going to look good on the Corporate Social Responsibility report?

DM: Certainly. I mean, I'll give you a very simple answer to this. If you look at a 2U rack server, we pack at least 18 to 20 of these into a 42U rack. With the current CPUs, memory, and GPUs, 10 to 15% of one server's [power] is consumed by fans today. That can be anywhere in the vicinity of 175 to 250 watts. Now, if you have 20 of those servers, you're talking about something in the vicinity of 4,000 watts, four kilowatts. What can we do to reduce that power?

What we have seen, based on some of the testing and calculations done on a 2U rack server with the latest and greatest CPUs we are seeing from Intel and AMD, is that we can bring almost a 60% [...] improvement by adopting one of the liquid cooling techniques. What do I mean by that? It means I can reduce roughly 100 to 220 watts of fan power, by enabling liquid cooling, down to maybe five to ten watts at most to run the pumps and whatnot. Liquid cooling also requires some power because, as I mentioned, I either have to have a cooling distribution unit, which has a big radiator and condenser, and they all need motors to run, and then there is the electronics. But you can see the reduction in power; 60% is a big number.

There are customers now asking: in order to reach our sustainability targets, can we deploy liquid cooling? Not just to enable the highest-end TDPs; even in a modern TDP system, can we deploy liquid cooling and cut down the overall power consumption of the rack units? So we are looking at this seriously, and you will see it being enabled down the line in a few years. Not just for enabling high-end TDPs and high-end GPUs, but also to help bridge or meet these sustainability targets. It is going to happen.

MS: And that's a positive message. If the industry manages to cut down on its [power] consumption and improve its carbon footprint, everybody wins. So thank you for this in-depth look; it has been interesting, educational, and entertaining, and good luck with your son's PC. I hope it's one of those systems with fluorescent coolant that you need to top up, because those are just beautiful. It has been an absolute pleasure. Hopefully we'll speak to you again, but for now, good luck with your work.

DeepKnowledge (深知社)

Translation:

Plato Deng

Senior Data Center Researcher at DeepKnowledge / Founding member of the DKV program

Proofreading:

Eric

Founding member of the DKV (DeepKnowledge Volunteer) program

Public account statement: This is not an officially authorized Chinese edition of the original article. It is provided for readers in China for study and reference only and may not be used for any commercial purpose; the original English text shall prevail. This article does not represent the views of DeepKnowledge. The Chinese version may not be reproduced without written authorization from the DeepKnowledge public account.


