Struggling to keep up with the evolving demands of Data Centers? Join Max Clark, Founder at ITBroker.com, in a revealing conversation with Jay Smith, Vice President of Business Development at Evocative.
We explore the evolving world of Data Centers, the unique approach of Evocative, and the intriguing shift towards Bare Metal. With over two decades of shaping the Data Center landscape, Jay offers unparalleled insights on customer demands, AI and ML trends, and some bold predictions you don't want to miss! What's your take on the Bare Metal vs Cloud Debate? Share your thoughts in the comments👇
Max:00:09
I'm Max Clark. I just finished recording with Jay Smith of Evocative Data Centers. Jay's been in the data center business for over 20 years and has seen a lot in that span of time. We talk about data centers, how Evocative is building, what they're looking for in facilities, how they're approaching customers, what they're seeing in terms of customer demand between traditional users and the AI and ML explosion and the explosion of density requirements that comes along with it, how Evocative has pushed into bare metal, how customers are consuming bare metal, cost structures for data center bare metal versus cloud, and some interesting predictions at the very end. It's worth hanging around and catching the tail of this.
Max:00:49
Hope you enjoy it. This is a tech deep dive, and I'm joined by Jay Smith, who is now the VP of business development with Evocative Data Centers. Jay has been involved in the data center world for a long time. I'm not gonna call you out directly, Jay. I'm gonna let you call yourself out for how long you've been in this crazy business and this crazy world.
Max:01:07
And we were chatting a little bit beforehand about how you've also transitioned out of an operational role into the other side of the house, which is good and bad. But first and foremost, thank you for volunteering to do this and for the conversation.
Jay:01:19
Thanks for having me. It's, it's great to reconnect and kinda talk through things, how they are now versus the way they were 2 decades ago.
Max:01:27
So 2... see, there you go. 2 decades.
Jay:01:29
I'll leave it at that. It's actually probably closer to 3, but we'll leave it at 2.
Max:01:33
It's longer than 2 decades, Jay. I've known you for longer than that. So let's start with a little background on Evocative. And, actually, we can kinda go back to inception. Right?
Max:01:43
How you got into data centers and how that has transformed over time into what Evocative is now. We don't need to hit every high and low, but that means we're gonna start back in the nineties sometime, probably.
Jay:01:54
Nineties, I went to work for a very old family friend. And for the next 21 years, he and I ran a little boutique colo data center in Southern California that expanded into Phoenix and Dallas, and then he moved to retire. So we sold the entity to Evocative, and we were part of a series of acquisitions that took place over the last 7 years. I came on board, and we kept growing the company through a series of additional acquisitions up through last year, the biggest one being when we acquired the colocation and data center assets from INAP.
Jay:02:32
You know, I did operations and kept rolling through, doing salesy things on the side and helping the sales team, and then pivoted out of that role and into business development last year. So it's been roughly 12 months now that I've been in the seat. And we keep growing.
Jay:02:46
We keep finding new data centers and expanding our operations, adding staff and new facilities, in new markets and in the same markets we're in.
Max:02:55
So in the course of the last 25 years, from your seat as an operator, there are companies out in the market building data centers, and these are the hyperscale, I really don't like that term, but these are the hyperscale campuses where you see 100-plus-megawatt projects going in. But what always surprises me is the longevity of data center assets. Facilities that were built in the nineties are still running today. I know of several that actually have my blood on their floor tiles. We did the initial installations in the late nineties, and they're still going.
Max:03:27
They've traded names a bunch of times, but they're still there. So from an operator side, and now as you're looking at acquiring facilities, how do you think about this market? What are you really looking for, how do you evaluate geographies and facilities, and what makes an interesting acquisition target? How do you navigate these waters? Because you're not in the business of building data centers.
Max:03:47
You're really in the business of buying and running data centers.
Jay:03:50
Oh, I would say in my pivot last year, right, I stepped away from that role, and I'm not privy to what's in the pipeline per se. In this day and age, we're looking to add to the portfolio, and we're focusing not just on facilities we can acquire that have an existing base, but ones we can expand, whether it's from a real estate perspective or a power perspective or a retrofit perspective. There are a lot of different options available in the market. And, again, I'm sort of out of that role now, so I really don't know exactly what we're after.
Jay:04:24
But I know the team is working to add to the portfolio that we have today.
Max:04:29
Data centers, I mean, there was a big push to build data centers in the late nineties. Dotcoms, really this expansion of the Internet. That tracked forward into the mid 2000s, and then we had the dawn of AWS and really this idea of what we see as cloud computing today. Right? And then around that time, there came a lot of predictions from certain people that data centers were dead.
Max:04:53
And it doesn't feel like data centers are anywhere close to being dead in the 15 years since those predictions started. Quite the opposite, almost.
Jay:05:03
I'm still here. Right? I'm still doing what I did for the last 25 years. So I must be doing something right, or not everybody at the time was really looking to pivot and move to the cloud. Everyone said that data centers are dead, or, oh, you're gonna be out of business in 5 years, back in 2011, 2012.
Jay:05:21
12 years later, I'm still here. So I must be doing something right. And not everybody really wants to go to the cloud. Even back then, I recall a client I had that stood up a bunch of stuff in AWS
Max:05:31
at the
Jay:05:32
time, and they wanted to pivot away from it. So they did a lift and shift then. Customers and end users now are doing the same. They're seeing cost savings. I've got some data here; part of it's cost predictability.
Jay:05:46
Two thirds of the companies surveyed by 451 Research in December of 2022 reported being billed higher than they expected. 47% said it was 10 to 20% higher than they expected. 25% said it was more than 30% higher. So folks are shifting loads out of the clouds and into the colo model or even the bare metal model, mostly for cost savings.
Max:06:07
So in the early 2000s, we were seeing data center footprint densities of 50 to 75 watts per square foot. It felt like that's what most facilities were built for.
Jay:06:16
Yeah. I usually use the number of 80. But
Max:06:19
Fast forward, it was the new sexy to say 100 watts, 150 watts, 200 watts, 250 watts. The density per square foot started increasing pretty significantly. And at the same time, I can remember the late nineties, early 2000s: I worked for a dotcom, and we had half a rack of servers colocated, and it felt like a lot. And there were people in the facility that had 20 racks. That was the most equipment you could ever imagine in your life, looking at it.
Max:06:45
And that trend seemed to go up a lot. Hundreds of servers, thousands of nodes deployed into public cloud, massive consumption of public cloud in terms of virtual cores. And now we're seeing deployments coming back into data centers, but still very large and very dense, sized in thousands of cores, multiple terabytes of RAM, different disk configurations. When you're talking to customers and they're evaluating these things, and let's talk about that shift back from cloud into physical equipment in a data center, what are you guys seeing in terms of that conversation around density?
Max:07:24
And are people still chasing this "we wanna put 33 kilowatts in a cabinet because that's just what we believe we're supposed to do"? How is that trending for you guys, and how are you managing that with your customers?
Jay:07:35
I wanna bifurcate that. There's 2 different kinds of builds now in my view. There's traditional small to medium sized business up to large enterprise that are looking to deploy their infrastructure. Back in, again, the early 2000s, the usual MO was roughly a 3 kilowatt build.
Jay:07:54
3 kilowatts per cabinet. Over the years, we've seen, and I'd probably say it's about once a decade, a little bit of a jump to 5 to 6 kilowatts a cabinet. And now the standard is typically 8 and a half kilowatts per rack. So you make the assumption that's a shift from 80 to 125 and then from 125 to 150 watts a foot. We're seeing new end users that are looking for these AI/ML builds, and those are really the ones pushing towards 35 to 50 kilowatts a rack.
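(A minimal sketch of the density arithmetic Jay describes, for readers following along. The square footage each cabinet is charged with, including its share of aisle space, is an assumed planning input here, not a figure from Evocative; the numbers are illustrative only.)

```python
def watts_per_sq_ft(kw_per_cabinet: float, sq_ft_per_cabinet: float) -> float:
    """Convert a per-cabinet power budget into a floor-loading figure.

    sq_ft_per_cabinet is the cabinet's *allocated* share of the room,
    including its portion of hot/cold aisles and clearances -- an
    assumed planning number, not a measured one.
    """
    return kw_per_cabinet * 1000 / sq_ft_per_cabinet


# Illustrative only: roughly reproduces the 80 -> 125 -> 150 W/sq ft shift
# mentioned above, if the floor allocation per cabinet grows as density grows.
for kw, sq_ft in [(3.0, 37.5), (5.5, 44.0), (8.5, 56.5)]:
    print(f"{kw:>4} kW/cab over {sq_ft} sq ft -> {watts_per_sq_ft(kw, sq_ft):.0f} W/sq ft")
```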
Jay:08:23
We're hearing conversations where folks are asking for 100 kilowatts a rack, and those are special builds. We've got a new offering called the Eaglepod. Can't really dive too deep into it, but, effectively, it's a block of 20 racks or so, 24 racks, I think, with a megawatt of capacity built in. This is probably a deeper conversation we have on maybe another podcast down the road. But, again, that's kind of the big shift.
Jay:08:46
The reason they want to drop these high density cabinets together has to do with latency between the nodes. They need to be within so many feet of each other, roughly 4 meters, for latency reasons. But, again, that's a much deeper dive, and we could run this call, this pod, for 3 hours talking about it.
Max:09:07
Well, I mean, we might just completely sidetrack onto this. I'm immediately thinking about the old Cray computers, where they had a crescent shape configuration specifically for the wiring between compute and storage resources. It's like everything that's old is new again at some point. It's just how much time has to elapse before it comes back.
Jay:09:25
Right. Again, back to the old client server model. Right? We're in some ways shifting back to that.
Max:09:30
How much of customer demand, I mean, so okay. One megawatt in 20 to 24 cabinets is a lot of power and not a lot of space. And just knowing GPU consumption and knowing what goes into these things, I get it. I understand where these things are coming from. This isn't like you're colocating 200-watt-per-CPU kind of builds.
Max:09:50
Right? Like, that's a completely different animal. How much of data center demand are you seeing today coming from the traditional space, as you said, like SMB and enterprise repatriating, I also don't like that term, but I'll use it, repatriating cloud workloads into a data center setting of some sort, whether that's them buying equipment or looking at bare metal, versus computationally intensive AI and ML workloads coming across? And let me give you the second prompt with that one, which is how much of that do you think is driven purely by cost, and how much of that do you think is driven by performance or configuration, being able to gain access to things within the cloud versus going out and putting it onto floor space?
Jay:10:30
So just to kinda walk the thought process here, from what I'm seeing on the request front, it's probably 60% traditional colo, as I call it, maybe closer to 70. We're looking at large builds for large end users that are looking to get into the AI game, if you would, or to build their own AI clusters. 30% is probably a bit on the higher side; it's probably closer to 25, I would think.
Jay:10:55
But the repatriation is real. And, again, I don't like that term either. Right? But it kinda fits.
Max:11:01
Don't swim upstream against marketing. Just accept it and kinda keep going.
Jay:11:05
Traditional colo doesn't fit for an AI world. It really doesn't. The push is to pack more into a rack, and there's physics. Right? Physics wins.
Jay:11:14
It's the reality of it. So some of it's gotta be air cooled, and that's typically on your network side. The push is really gonna be, eventually, to be liquid cooled. I know that's weird, and those of us who've been doing this a long time always wanna fight the fact that you wanna stuff water into a piece of electronics, but it is what it is.
Jay:11:32
It's real. And it's simply a physics game. We wanna pack as much as we can in a cabinet, and we need to cool it. And the best way to do that is liquid to the chip. So we're going to get there, but we're already there, really.
Jay:11:46
But that bleeding edge tech is what's driving it.
Max:11:49
But that liquid cooling, how much of that is a design that impacts the data center and your footprint as a whole? Or how much of that is just an application buying a specific thing, going out and getting a cabinet with compute blades or GPU blades with liquid cooling in it? And now, as a data center operator, you have to know, okay, there's nonconductive liquid in these cabinets over here in the corner, and be careful with them because you don't want that stuff on your floor.
Jay:12:15
At least we are not going to run liquid to the rack on a traditional data center floor. That's the reality. These are gonna be purpose built, designed with the expectation that the end user understands that there's water on the floor. Right? This isn't just chilled water rolling underneath the cabinets.
Jay:12:34
It's in large piping that's got sensors and stuff. We're literally plumbing it to the top of the cabinet. The hoses plug in there, and the hoses plug into the machines, or into the rear door heat exchangers. You're effectively grabbing the heat at the source, pulling it out of the rack, and doing the heat exchange there. So they're gonna be purpose built.
Jay:12:51
We're extremely nervous about blending that with a traditional colo environment, as we should be.
Max:12:55
Yeah. Everybody asked me, after running data centers in Southern California, it was always, are you afraid of the earthquakes? And my answer was no. I'm afraid of smoke and water suppression firing off in these buildings. The good old days of inert gas have long since left, and most building code requires you to have water in your data center.
Max:13:09
The pipes should be dry and all the other good fun stuff that comes with it, but I don't ever wanna experience a water suppression system going off in a data center. So did I write this down right? Because you started by saying 60, 70% traditional, but then large builds with AI, and then maybe 25 to 30% repatriation is the focus. So the majority of demand at this point is still traditional, compared to customers looking into AI and ML workloads.
Jay:13:39
Again, I don't have raw stats. I'm just going off of, you know, what my feelings are. Right? I think maybe it's a little bit different. Maybe it's probably 60 to 70% traditional, 15% repatriation, and another 15% in AI.
Jay:13:51
I think we're seeing more opportunities than we have in the past.
Max:13:55
What's amazing about that number is how high the traditional slice actually is within that. I mean, from a cost envelope standpoint, delivering 30 to 50 kilowatts per cabinet, even if the demand is a much smaller percentage of requests, the capacity and the actual consumption is so much higher. It's amazing how much AI and ML really drives that. And also, I mean, it's somewhat geography specific, but not necessarily industry specific anymore. This isn't just crypto miners going out and putting up GPU farms to try to mine cryptocurrency.
Max:14:30
You're talking lots of different segments. Have you noticed are you guys starting to focus on any specific verticals as a result of this, or are you seeing more demand coming from any specific verticals?
Jay:14:41
Well, obviously, the generative AI companies are out there hunting. And the challenge is, I think we saw through COVID there was a shift when everyone came to the conclusion that everyone needed to work from home. We saw a big push of, we need to pull our machines and resources out of our offices and into the data center or into the public clouds. Then things sort of cooled as COVID lessened. We still live in a COVID world, and I think we're gonna live in a COVID world or a flu world or whatever you wanna call it for the rest of our days. Now we have a sort of hybrid model where there's folks in the office and folks working remote, or a combination of both.
Jay:15:16
And so the shift now is to right size what we have. Right? Do we run it in the cloud? Do we run it in the data center? Do we do a little bit of both?
Jay:15:27
And the generative AI groups, they're looking for the latest and greatest. So to kind of package that all up, you saw a big push into acquiring data center space, whether it's hyperscalers or what we call the wholesale model, right, where it's colo in bigger chunks, north of, call it, 300 kilowatts to a megawatt. And the hyperscale data centers were running around grabbing up every chunk of real estate, every generator, every piece of switchgear, all the cooling tech.
Jay:15:56
And so really through 2022, a lot of the available data center space got gobbled up. Now there's a big push in AI, and everyone's like, well, we need to get 35 to 50 kilowatts a rack. Well, there's not a ton of facilities capable of handling that. So now you need to go in and retrofit what you have or build new. And at one point, I think it was 64 months to be able to get a generator.
Jay:16:20
So now all the space that's gonna hit the market in certain areas, notably Dallas, I think is already locked up. Even stuff that isn't finished being built today is already consumed. And then you have the AI push, and everyone's like, well, I need that many kilowatts a rack, and there's no data center space available for them. So now we're seeing, quote, unquote, retrofits or redeployments or what have you. And it's gonna be interesting in 2024 as to what's available and where and who's gonna win.
Jay:16:48
And
Max:16:48
I think this conversation shows a huge disconnect between the operator, the large consumer, and, let's say, the average enterprise buyer that maybe isn't actively in the market doing these things. We get capacity notes from facilities. Pick a market. Oh, we've got x capacity coming online on this date in this market, and we're selling into it now. Right?
Max:17:13
Maybe this is 6 months ahead, a year ahead, 18 months ahead, whatever it is. And it's really interesting to see the end user reaction to that. In some cases, almost disbelief that we need to plan ahead this far with data center consumption. And I've seen this now a bunch of times where, especially in really popular markets or really popular facilities, an end user customer goes to expand, says we wanna increase our density or add cabinets or do these different things, and turns around and finds out the facility's full. There is no power left in the facility, and then they have to figure out what to do from there, which can create its own set of problems. I mean, when that wheel starts rolling downhill, it crushes all sorts of things.
Max:17:54
How do you work with end users as you're going through this, looking at these facilities and trying to plot capacity and demand? Years ago, it was popular to go out and try to negotiate a ROFR on space, right, or a ROFR on power. What is the right way for a customer to work with you and look down the horizon? How many years out should they be looking? How do they track it?
Max:18:21
What, in your view, are best practices related to this?
Jay:18:26
I used to have catch up calls with clients, or we call them QBRs, right, quarterly business reviews, whether it's semiannual or maybe annual depending upon the size of the customer. And part of that is just checking in, making sure things are rolling along the way they should. Right?
Jay:18:40
They're happy, or they have challenges with things. But, really, it's us checking in on them: hey, where is your business at? Where are you looking to go? Where are you looking to grow? That would take place, I would say, 10 years ago, just finding out, hey,
Jay:18:53
figuring out what your plans are long term. Right? So you're sort of trying to space plan what you have: does it make sense to expand the cage or cabinets, or do we need to look into maybe finding them additional cabinets adjacent, and so forth? Now we need to be having these conversations more frequently, simply because there's such demand for capacity in our facilities. I don't wanna have to say, sorry, we don't have capacity, or we didn't plan for that. Right?
Jay:19:20
And so, you know, recently, we had a facility in Silicon Valley that's full, and we're gonna have to add additional capacity to the data center just to support existing clients in their workloads, in their growth. That's 6 months, 9 months, 12 months down the road, but they've already committed to it. So here we are. Right? 3 or 4 of our facilities in key markets are at capacity.
Jay:19:44
So it's difficult to have those conversations with end users. Like, well, I thought I'd be able to get another rack down the road. As far as a ROFR, depending upon the facility, sure, we'll provide them an opportunity to grab another rack within some period of time, but at the end of the day, that's typically gonna get called earlier than the end user wants. So, again, it's not a fun conversation to have, but it happens.
Max:20:05
How does bare metal impact this? Are you seeing traditional users leaning into bare metal, or is bare metal dominantly a repatriation play and
Jay:20:16
It can be some of both. I've got a client that we had conversations with starting in February of last year, and it was on a business review. We talked about their renewal that was impending, and the conversation was that they've got some workloads running in the hyperscale clouds, and they're looking potentially to save money on what that costs. So the renewal gets done, but we've already worked three quarters of the way down the road on, hey, we want a proof of concept to see if we can run in your bare metal world and what that really looks like from a cost savings perspective.
Jay:20:52
The proof of concept just really kicked off, but the conversations are that it's gonna grow potentially 10 or 20 fold if they can get their applications to run in that. And, again, they run it in Kubernetes, and it's a little easier for them to pull out, per se, than some other folks, but the math works. It's a 40% savings. So, again, business reviews are key, whether it's with us or with whoever your operator is, and checking in periodically to figure out where you're at, where you're looking to go.
Max:21:24
Let's dig into this. So bare metal means different things to different people. My favorite thing with tech is how many things get wedged into terms. Right? Clustering.
Max:21:32
What is clustering, and can you define it? And the answer is no. Of course not. So what does bare metal mean at Evocative, and how do you guys look at bare metal with your customers?
Jay:21:43
So, again, it's similar to the public cloud world. Right? It's a block of hardware that's allocated to you. It's got so many CPUs, so much RAM, so much disk per machine. I guess the major difference is it's not as elastic as what you see in hyperscale clouds, but I think it defines the block of resources you have available and creates cost certainty.
Jay:22:06
So, again, you have to size it to be appropriate to what your max capacity would be, but it also means you're not gonna have the increases in costs that you would see in a cloud environment. The notes I have on this say 47% of companies surveyed by 451 Research in December of 2022 said that their cloud costs were 10 to 20% higher than their budget. 25% said it was more than 30% higher. So it's a control thing, really. It's controlling your costs.
Jay:22:35
Do you have a bunch of shadow IT instances getting spun up because an engineer needs a lab? I just need a test bed, so I spin up another DevOps box with some resources, and they forget to turn it off. Or is it egress? Right?
Jay:22:50
I think that's a big piece that folks don't really understand: when we're pulling data out of the cloud, how much does that impact our cost? So to me, we're defining blocks of resources that are available to you. It is not as elastic as folks would want it to be, so you have to budget and plan appropriately. Not just dollars, but resources.
Jay:23:09
We have 3 different builds called small, medium, and large. We're trying to keep it simple. Right? A small box comes with x CPUs, x amount of RAM, x amount of disk.
Jay:23:18
Medium is a chunk more. Large is a little bit more than that. There's an extra large build; it's usually on an individual case basis. So, again, these conversations come up, and you have to size it appropriately.
Jay:23:30
It takes a little bit of work. But, again, customers are gonna have more defined costs, and they don't necessarily need to worry about, you know, cost overruns or their budget going out the window.
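(A rough sketch of the sizing exercise described above: fixed small/medium/large blocks rather than elastic instances, so you pick the smallest tier that covers peak needs. The tier specs and the workload in the example are hypothetical placeholders, not Evocative's actual build sheet.)

```python
from dataclasses import dataclass

@dataclass
class BareMetalTier:
    name: str
    cores: int
    ram_gb: int
    disk_tb: float

# Hypothetical tiers -- the real small/medium/large specs would come
# from the provider's build sheet.
TIERS = [
    BareMetalTier("small", 16, 64, 2),
    BareMetalTier("medium", 32, 256, 8),
    BareMetalTier("large", 64, 512, 16),
]

def smallest_fit(cores: int, ram_gb: int, disk_tb: float) -> BareMetalTier | None:
    """Pick the smallest fixed tier that covers the workload's peak needs.

    Because bare metal isn't elastic, you size to expected peak (plus
    headroom) instead of assuming you can scale up later on demand.
    """
    for tier in TIERS:
        if tier.cores >= cores and tier.ram_gb >= ram_gb and tier.disk_tb >= disk_tb:
            return tier
    return None  # needs an "extra large" custom build conversation

print(smallest_fit(cores=24, ram_gb=200, disk_tb=4))  # -> the medium tier
```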
Max:23:40
So in the dedicated server world of yesteryear, providers were giving you an operating system on top of a machine, and, hopefully, that machine was actually a dedicated box. How deep are you getting into management on bare metal? I'm asking this question because I find a lot of enterprises that are thinking about going down this road but have never actually operated physical machines in a data center before, and they're concerned about what the change is gonna mean for them and how they actually take care of these boxes. It's easy to say log in to an AWS console or fire off an API request and provision an EC2 instance.
Max:24:13
It's a little different when you start saying, well, how do we install an operating system on a physical server, because we've never had to do that before.
Jay:24:20
Typically, you give us the build on what you want from a hardware perspective and tell us what OS you wanna run. Right? And then there's a build sheet that we provide back to you that tells you how to log in to the box. You obviously have your password and whatnot, and you're off and running. Right?
Jay:24:33
If you wanna install a different OS version, that can be done via out of band management, but it's really simple. It's not as complex, per se, as years ago when you and I would log into a Linux box and have to turn everything off, configure everything, and hand it off to a customer. And I recall a conversation from 2003 when end users were asking for VPSs, and that's really early cloud work, right? I just want a Linux instance running on a machine, and I don't wanna have to get hardware.
Jay:25:05
And I think the cloud, from a staffing perspective, created a lost generation of folks that really haven't rolled up their sleeves and dug into hardware and dealt with it. A lot of folks, and I'm not being critical of them, came out of college and just started coding in the different cloud operators. The idea with bare metal is it bridges the gap, because you don't necessarily need to know about hardware; it fills the gap between what traditional colocation is and what running in a hyperscale cloud is.
Max:25:33
So let's stay focused on the repatriation... repatri... well, I can't even say the word. I don't like it so much. The repatriation side of this conversation, because I think that's an interesting conversation to have.
Max:25:49
So an enterprise turns around and realizes they've got this giant AWS environment or GCP environment or Azure environment. Right? They've got a giant cloud environment. What are you hearing as the triggering conversation or the outcome that people are pushing for? These things don't just happen because somebody wakes up one morning and says, I've decided we're gonna close our cloud platform and go back to physical hardware because it sounds fun and I wanna put it on my resume.
Max:26:13
Right? Like, that's not usually the direction those resume bullet points wanna go. So what are you hearing from people in terms of what's driving this? Is this firmly a cost thing, where people wake up one day and just say, we're spending way too much money here, what do we do about it? Or are other things afoot?
Jay:26:29
Mostly, it's a cost based thing. Right? Look at what the artist formerly known as Twitter, now X, has done: they came in and reduced their monthly cost by 60%, pulling out of data centers they were in and pulling things out of the cloud, moving it to where they had more control. Or Basecamp: they've been very public out there about what they've done, shifting back from the cloud to the colo model. They're looking at savings, and you can find it, they're budgeting that it's gonna save them $7,000,000 over 5 years.
Jay:26:56
I mean, these are real dollars. There's the "we want more control of our data" folks. They wanna understand where their stuff is. There's the security folks, who are concerned with the security of their environments and need to have more control of them.
Jay:27:10
So I would say the majority of it is driven by cost. A portion of it is a control thing: we know where our data is. We've been pretty open about it: your data, your business; your business, your data. Right?
Jay:27:22
Because, again, have you ever dug in and read what the user licenses are on anything, or do you literally scroll all the way to the bottom? Like, I don't know. Right? And I'm not here to be accusatory of any of them. But
Max:27:35
But, Jay, the cloud is secure. I just have to put my stuff there, and I'm perfectly safe now.
Jay:27:41
We've even seen a push where clients are worried about their backups. Right? And where are they? Like, I think the days of going back to tape are real.
Jay:27:51
Not everybody's doing it, but if your tapes are in a box and whether they're buried in a bunker somewhere or sitting in a safe in another market, that's
Max:28:00
Compliance is an interesting one. You said control of data. When you say control of data, I immediately go to compliance in my head. And the example that's kinda floating around is what countries your end user data can sit in. And this is a big issue if you've got a global platform or a multinational corporation.
Max:28:17
Right? Now we're seeing issues with jurisdictions down to the state level. States are starting to define and get a little bit more specific about what happens with user data and where it sits. And, I mean, from a compliance standpoint, you mentioned tape drives.
Max:28:30
Immutable backups are a wonderful thing when used properly. It's nice being able to write something out and have it be perfectly preserved and not changeable, this wonderful little segment of time that you can rely on if you ever need it again in the future. And I've seen a lot of that conversation happening as well, with enterprises looking for that.
Jay:28:50
I took a tour through the data center, not earlier today, but yesterday, and walked by, and there's the tape room sitting there with a block of tapes sitting in a cabinet, and there's a box where they're rotating them in and out. I'm not saying it's gonna go back to the way it was 20 years ago, but tape didn't die. Tape's still there. Media companies love it because, again, they've got a backup, and it's sitting on a tape, and they've got multiple versions of it going back years, and they can drop it in the vault or ship it to a safe in another market, and it's real.
Max:29:21
Walk me through an engagement for bare metal. And what I mean by that is, let's just use an existing customer. You have an existing customer that has data center space, and we're gonna assume that they have a large
Jay:29:33
cloud footprint as well that they
Max:29:33
moved into. And they've woken up on January 1st and said, we gotta go the other direction with this. And so they call you up. How does that process work? What are the big milestones with it?
Max:29:48
What are the things that people should be ready for, and thinking about? How long does it take?
Jay:29:52
Typically, so, again, rewind to the client I started the conversations with a year ago, it's really about sizing. That's the first step. Unless you can tell me what you need from a CPU, memory, and disk perspective, it's difficult to really dig in. So whether they're reviewing what they have from a resource perspective or, you know, even looking at an invoice, the first step is sizing.
Jay:30:14
Right? Figuring out what you need, what you need to start, what you potentially need to grow to, and how quickly. Then the conversation is, okay, what do you need from an OS perspective? Is there a license fee?
Jay:30:25
Are you bringing your own, or do we have to acquire it for you? And then looking at the economics of it, like how long a term. Right? Is this a situation where we're in a month to month contract, which is gonna be more expensive than a 2 or 3 year term? And then there could be a test bed. Right?
Jay:30:42
Proof of concepts are there for you to try it out and make sure it's gonna run the utilities that you need to run and/or your applications. And then the engagement, again, this is a fairly large build, so we're 9 months in from when the conversation started, and there have been in-depth conversations with their DevOps and engineering teams all the way through. Proof of concept launches, we're midstream on that now. They'll make sure that they can run their applications the way they want, and then there's a scaling conversation.
Jay:31:12
How big do you want it to get, and how quickly? Because, again, it's not elastic. We're allocating blocks of defined resources, and we need to be a bit ahead of the curve, so to speak. As you are intimately aware, there's not just limitless resources available. You actually have to rack them and stack them and cable them up and make sure the network is going and burn them in, and that takes a minute.
Jay:31:36
So it's not an instantaneous rock and roll. We try to have metal available for clients to be able to spin up quickly, but, again, there's defined resources. There's not a limitless number of servers. And servers, CPUs, and RAM configurations change, I wanna say daily, but really it's more like a quarterly or annual thing, and customers wanna have the latest and greatest. We allocate 10 to 20% for failover so that we can handle it when an end user's box is misbehaving, because servers die.
Jay:32:05
It's not a perfect world. Components will fail. Fans don't spin perfectly. So if an instance is giving you hardware trouble, then we may be able to reallocate your workloads to another instance. And it's really not as complex as everyone thinks. It's kinda simple.
Jay:32:22
This whole business is simple, to be honest with you. I keep saying it all the time. At the end of the day, it's all a math equation.
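(The "math equation" for the failover pool is simple enough to write down. A sketch, assuming the 10 to 20% spare figure is applied across the whole committed pool; the exact policy is an assumption, not something stated in the conversation.)

```python
import math

def pool_size(committed_servers: int, failover_fraction: float = 0.15) -> int:
    """Servers to rack and burn in for a bare metal pool.

    failover_fraction is the 10-20% spare allocation mentioned above;
    spares let a misbehaving box be swapped for another instance
    without waiting on a hardware lead time.
    """
    if not 0 <= failover_fraction < 1:
        raise ValueError("failover_fraction must be in [0, 1)")
    return committed_servers + math.ceil(committed_servers * failover_fraction)

print(pool_size(40))        # 46 racked for 40 committed
print(pool_size(40, 0.20))  # 48 at the top of the 10-20% range
```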
Max:32:27
It's funny for me to have these conversations because I think about it a lot from, let's just say, the buyer's position. Right? And so when we start talking about what the buyer is going through with these things, you have enterprises that have never run data centers. You've got a finance team that has never been involved with data centers and hardware cycles. You've got technical teams that probably haven't been involved with data centers and hardware cycles.
Max:32:49
And that's very normal now. There's a lot of businesses that started in the last 15 years that just never dealt with any of this stuff, or a lot of people that came into careers in the last 15 years that never dealt with this stuff. And it feels overwhelming if you're in that position. And of course, I remember building computers, and I won't say when I started doing that.
Max:33:07
And it's almost the exact opposite for me. It's like, oh, it's no big deal. Go get a data center, call up Dell, call up Supermicro, whoever you wanna do. Right? Have them ship a semi truck out there, put them in racks, turn them on, and off to the races you go.
Max:33:20
It's not as simple as that either, but some parts of it pretty much are.
Jay:33:24
Well, to you and I, it's not complex because we've done it. Right? You don't know what you don't know. Right? And, again, the lost generation, and, again, I'm not trying to be critical.
Jay:33:34
They just don't know what they don't know. They haven't had an opportunity to deal with it. We were both probably going to Fry's and buying a bunch of components and building machines when we were very young men. I'm not a young man anymore. I can't speak for you, but it's scary.
Jay:33:47
Started working
Max:33:47
when I was 12. So I'm
Jay:33:49
I'm still learning. I am. I still have, to this day, a Commodore 64 and an Apple IIe and an Atari 800. I haven't got rid of them. They just sit in a box.
Jay:34:00
But, again, you don't know what you don't know. And there are times, most recently I've had a couple, where there are customers that don't know hardware. And they've come in and said, hey, we need help. Can you help us?
Jay:34:13
And we can help to some degree. Racking and stacking, our data center engineers do it all the time. Customer end users don't. Cabling is an art. You know, if cabling isn't done appropriately, it's just spaghetti.
Jay:34:24
All you're doing is taking servers that are supposed to last you 5 to 7 years and making them cook themselves in 3, maybe 2. So
Max:34:30
You gave me a stat earlier, and I'm curious: was this a stat that Evocative has seen, or is this an industry stat? And this was the 40% reduction in costs going from cloud to bare metal.
Jay:34:42
That statistic is based upon an end user client that I've been working with for roughly the last year on shifting. That's our math, based upon what they're paying today in the public cloud and what the costs are to run it in a bare metal instance.
Max:34:56
And is this pre POC and hardware sizing and validation, or is this post POC hardware validation?
Jay:35:03
Post.
Max:35:04
K. That's a big number.
Jay:35:05
There's more data than I have at my fingertips to be able to share, but we have done some additional research. Let me find it.
Max:35:17
I mean, what's the reaction you find when you're in that conversation and these numbers come up? Are people excited, upset? Do they believe it? It's not a small number. We're not talking about, hey, we're gonna shift 10, 20 percent of your spend.
Max:35:31
When you start talking about we're gonna change 40% plus of your spend and your cost to operate this environment, it starts getting into "I don't believe you" territory pretty quickly.
Jay:35:42
Well, the proof's gonna be in the pudding. Right? So, again, this is gonna be based on how long you're willing to commit to infrastructure. The shorter the term, the less the savings.
Jay:35:52
It has to do with the size of the instances, and, again, the instances you're running today versus running them in a bare metal configuration. Statistically, our product marketing teams think that, depending upon what you're paying today for transport of the data out of the hyperscale cloud worlds, it could be as high as 70%. My 40% numbers are conservative. The reality will be in what really comes of it.
Jay:36:16
Right? You won't know until you do it. But I've done the math. And, again, this whole business is just a math calculation.
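(A back-of-the-envelope sketch of the comparison being discussed: monthly cloud spend, compute plus egress, against a fixed bare metal lease with a term discount. Every number and the discount schedule are placeholders; the 40% and 70% figures in the conversation come from a specific client's data and from product marketing, respectively.)

```python
def monthly_cloud_cost(compute_usd: float, egress_tb: float, egress_usd_per_tb: float) -> float:
    """Cloud bill: variable compute plus data-transfer-out (egress) charges."""
    return compute_usd + egress_tb * egress_usd_per_tb

def monthly_bare_metal_cost(base_lease_usd: float, term_months: int) -> float:
    """Fixed lease; longer commitments assumed to earn a discount.

    The discount schedule is invented for illustration -- real pricing
    depends on the provider, the build, and the market.
    """
    discount = {1: 0.0, 12: 0.10, 24: 0.20, 36: 0.25}.get(term_months, 0.0)
    return base_lease_usd * (1 - discount)

cloud = monthly_cloud_cost(compute_usd=50_000, egress_tb=200, egress_usd_per_tb=90)
metal = monthly_bare_metal_cost(base_lease_usd=51_000, term_months=24)
print(f"cloud ${cloud:,.0f}/mo vs bare metal ${metal:,.0f}/mo "
      f"-> {(1 - metal / cloud):.0%} savings")
```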
Max:36:24
Data egress from these cloud platforms is such a scam. It's incredible what they're getting away with. And what's even scarier is knowing, at the scale these platforms run, what their actual buy rates likely are versus what even a large enterprise can go out and acquire bandwidth for. Cloud egress cost is way up here, large enterprise buying is down here, and the real cloud consumption price is probably lower still. It's crazy to me how effective the industry has been at making that number so obtuse and so disconnected from the actual economic reality they have for these things.
Jay:37:07
I'll liken the invoices that I've seen to what our phone bills looked like 20 years ago: you can't make heads or tails of it. You have to dig into it and understand it. Or a utility bill. Right? You look at the electrical bill you get at home, and what's this charge or what's that charge? Or, you know, a telecom bill, what's an FUSF charge?
Jay:37:28
Oh, at the end of the day, it's taxes. I don't see taxes in those invoices. So, again, I have to go off the data the clients provide me. In the case of the build I'm doing for a customer today, I actually haven't even seen their invoices. They're just giving me the raw data on what it costs them to run another vCPU, and we budgeted and figured out what we could do the hardware for, how long a term, and what it looked like.
Jay:37:50
And the math penciled out. Bigger build, longer term, it's probably gonna go higher. Again, get through the proof of concept and make sure their application can do what they think it can do. But the other benefit is, if it's running in an environment that's just the east or just the west, then we can throw in the other magic buzzword everyone's been talking about for the last 5 years and move it to the edge. Let's stand it up in a data center that's closer to where our people are, our DevOps team, so that they can do their builds closer to home.
Jay:38:19
Right? I mean, I guess we could talk about Internet of Things too if you want. But, I mean...
Max:38:23
God bless telcos and trying to create marketing demand for stuff that nobody actually really cares about. We get to talk about the edge because everybody's told us to talk about the edge. 15 years ago, the rage in data centers was carrier neutrality. There were a lot of aggregators, resellers, arbitrage plays, tier 2 networks, a bunch of different people that popped up into the space. And this, of course, was I think the tail end of the byproduct of tier 1 transit being very expensive and opportunities to address that in market.
Max:38:53
And lately, there's been a lot of consolidation within that. There have been exits by different players you would normally put into that large tier 2, maybe tier 1 space. I'm seeing a changing mentality around this, where I think end users are less concerned with "I need to be on XYZ network as my end all be all" and more just "I need an Internet connection." I think this is also a byproduct of cloud to some degree. What are you guys seeing, and how are you approaching network as a strategy with customers?
Max:39:19
I mean, obviously, with bare metal, I would assume. My mom still tells me not to assume, but are you getting into network strategy conversations with people with bare metal? Or are you saying, hey, we're gonna provide the network, and you don't have to think about this? And what's happening in the data center?
Jay:39:32
Some of them. So, obviously, we have carrier neutral facilities; the carriers available on-site range anywhere from 8 to 15. We run our own network. The Evocative network is robust and has multiple tier 1 providers and carriers blended in, with access to peering fabrics.
Jay:39:50
We'll provide transport, whether it's an MPLS-like circuit from data center A to one of our data center POPs, whether you want an MPLS circuit or even a wave, a diverse wave if you'd like, taking it both ways around a ring. Or you wanna interconnect from our data center in, say, Southern California to a data center in Dallas. There's the ability to take ports, and we can virtualize channels: I want a gig that runs from Southern California to our data center in Dallas, or I wanna run a gig to a POP that's in downtown LA.
Jay:40:23
Again, we have a robust network that we've built that allows end users different functionality. And we are agnostic, per se, whereby if you wanna connect, whether it's to a PacketFabric or a Megaport or carrier XYZ, again, we try to be a one stop shop, be complementary, and be as easy as you want us to be. Right? We wanna be the easy button for end users.
Jay:40:47
So, yeah, from a network perspective, we offer all different flavors. Right? And I think our network, compared to some of the other larger colo operators, is probably more robust. Again, I haven't done the Pepsi challenge, and now I'm dating myself talking about that, but at the end of the day, it's what do we have at your disposal.
Max:41:05
This just goes back to starting my first colo. We had a 10 meg connection to Exodus, and it felt like the universe at your fingertips. Part of my old guy thing has become this rationalization of how fast networks and Ethernet have gotten. I'm seeing a lot of 100 gig transit links to customers, where they're taking a 100 gig or 2 by 100 gig circuits out of data centers. And, of course, from the operator standpoint, 400 gig and 800 gig are a reality and coming. Every time I see one, it's just like, wow.
Max:41:41
Contemplating just how much bandwidth that is, how much is actually being consumed and used, and how much is being shipped around. How has that impacted your data centers and customers increasing their port speeds with you? Are you still seeing 1 and 10 gig as common interfaces? Servers seem to be standardizing on 25, 50 gig now. Are you getting a lot of demand for 100-plus gig interconnects with customer deployments, especially in these larger footprints, the megawatt deployments of AI/ML platforms?
Max:42:12
Like, how does that change things?
Jay:42:14
A 100 gig's the big push now. You still have end users that want a gig. Right? And they're typically on the smaller side of the spectrum. But the bigger the deployment, the more throughput they want; typically, it's a 100 gig port. And if they wanna interconnect to a carrier directly, it doesn't make a whole lot of sense to run that through an MPLS circuit.
Jay:42:33
So it's typically a 100 gig wave, and, I'd say, 90% of the time it's gonna be a diverse wave. So we're gonna give them 2 ports, and whether it's a cross connect to carrier XYZ that's located in the facility or we're moving it over to one of our other POPs, that's where we're going. I haven't gotten too much into the 400 gig world. Normally, when you get into that world, it's, oh, here's dark fiber. Right?
Jay:42:56
Or, oh, we're just gonna provision the circuits on our own. The bigger the end user, the more network centric and experienced they are, so you're not really handling those types of services. Right? But I would say there's been a shift from 1 gig to 10 gig being the standard now, 10 gig more often than 1 gig.
Jay:43:18
And very soon it's just gonna grow tenfold. I'm a firm believer that you just need to skip 40 and go to 100, because you're just gonna burn through the same tech anyway. So why give yourself another step you have to deal with in 3 years?
Max:43:33
I mean, fortunately, with the 6500 and 7600 series chassis finally seeming to phase out for the most part, 40 gig isn't a big push in a lot of places anymore. QSFP on Nexus is still a thing, but most modern stuff seems to have finally moved on from
Jay:43:49
that point. I mean, I recall 7 years ago doing a build for a customer where 100 gig was the platform they shipped to, and the conversations now are, well, we might go into bonding two 100 gigs to get to 200. My conversation with them is I would just go to 400. Right? What's the point?
Jay:44:05
All you're doing is putting a Band-Aid on it. Well, not a Band-Aid on a bullet wound. But
Max:44:09
Yeah. Well, you said beforehand, it becomes math. At some point, you look at: what's the cost for the port? What's the cost for the optic? What's the cost for the cross connect?
Max:44:15
What's the power envelope of the optic and the port on the device? Then you multiply that across whatever period of time you're running it, compare it with the alternative, and say, okay, does this actually make sense or not? I think that part's always surprising for people, going back to how basic things are, just how basic that conversation is. Right?
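(A sketch of the two-column comparison Max is describing, totaling port, optic, cross connect, and power costs over a term for two interconnect options. The prices, wattages, and the 4x100G-versus-400G framing are illustrative assumptions, not quoted rates.)

```python
def option_cost(port_usd_mo: float, optic_usd: float, xconn_usd_mo: float,
                optic_watts: float, usd_per_kwh: float, months: int) -> float:
    """Total cost of one column: recurring port and cross-connect charges,
    one-time optics, and the power the optic/port draws over the term."""
    hours = months * 730  # roughly 730 hours per month
    power_cost = optic_watts / 1000 * hours * usd_per_kwh
    return (port_usd_mo + xconn_usd_mo) * months + optic_usd + power_cost

months = 36
# Hypothetical columns: 4 x 100G bonded vs a single 400G port.
four_by_100g = 4 * option_cost(500, 800, 300, 12, 0.12, months)
one_400g = option_cost(1500, 2500, 300, 14, 0.12, months)
print(f"4x100G: ${four_by_100g:,.0f}  vs  1x400G: ${one_400g:,.0f} over {months} months")
```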
Max:44:35
It's just, does it make sense, what's your supply chain, what's the cost per unit, what's your sparing cost, and what's your cost for power. You put that in 2 columns of a spreadsheet, you look at them, and it's pretty obvious which direction you should go pretty quickly. I wanna circle back to something. You talked about sizing, coming out of cloud and figuring it out, and you mentioned vCPU. vCPU is an obtuse metric.
Max:44:59
Amazon doesn't define it, Google doesn't define it, Azure doesn't, none of these platforms define what a vCPU is. They don't define what the megahertz frequency is. They don't define what their oversubscription rates are. You just don't know what you're getting. So when you start this process, somebody says, we know we've got this fleet of these instances and this many vCPUs.
Max:45:22
Right off the bat, there's some math in the customer's favor immediately in switching to bare metal and physical hardware, just eradicating whatever is going on in the vCPU world.
Jay:45:33
Well, it's a defined resource. That's the thing about metal. We're allocating you an entire machine. Use it as you wish. You can break it apart.
Jay:45:41
You can oversubscribe it on your own, but there's no noisy neighbors. Right? And there's no oversubscription at our level. You can oversubscribe your box the way you want, but the reality is, I can't sell a chunk of your server to somebody else.
Jay:45:54
I've sold you the whole server, or leased you the whole server, if you would. Right? Because at the end of the day, you're leasing hardware. You're leasing hardware in a confined environment, in a data center we're managing, with the network connectivity already bundled in. Let's say you need a cloud on-ramp, because you're gonna run some loads still in the public clouds, but you need interconnection back to it.
Jay:46:15
Again, the big factor here, I think, is to save egress charges. That's readily available to you. So whether it's a cross connect or a channel on an MPLS or wave circuit, well, there's no channels in a wave, but it's doable. Right?
Jay:46:29
You don't wanna get into the world where you can customize everything, because it doesn't really work and it doesn't scale. But that said, there's ways to build bare metal instances to make your world easier.
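(Since none of the platforms publish what a vCPU maps to, any translation to physical cores rests on an assumed oversubscription ratio. A small sketch of that back-of-the-envelope conversion; the ratio and the SMT assumption are guesses you would validate in a proof of concept, not published figures.)

```python
import math

def physical_cores_needed(vcpus: int, assumed_oversub: float = 2.0,
                          threads_per_core: int = 2) -> int:
    """Estimate physical cores to cover a fleet sized in vCPUs.

    assumed_oversub: how many vCPUs you believe were sharing each
    hardware thread in the cloud (unknown; pick conservatively).
    threads_per_core: SMT threads per physical core on the target metal.
    """
    hardware_threads = vcpus / assumed_oversub
    return math.ceil(hardware_threads / threads_per_core)

# e.g. a fleet billed as 1,024 vCPUs, assuming 2:1 oversubscription
# and SMT-2 hardware, pencils out to roughly 256 physical cores.
print(physical_cores_needed(1024))
```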
Max:46:39
I'm looking through my notes here. I'm trying to think if there's anything that I have not asked you that I wanna get into here in the final moments of this. I'll ask you a prediction question. This will be fun. Let's say 10 years from now.
Max:46:51
What does this look like?
Jay:46:52
Wow. You wanna know where we're gonna be in 10 years?
Max:46:56
I'm asking you, you've got over 2 decades of doing this, so you've seen a couple iterations of this at this point. Based on your history over the last 2-plus decades, what does the next decade look like? Closing out the 2020s, where are we with data centers and cloud and bare metal, and what does this business look like at that point?
Jay:47:15
Well, with 2 decades of experience, I'm gonna say that data centers are still gonna be here. We're not building them in 10 or 20 megawatt chunks anymore; they're building them in 100 to 500 megawatt configurations. I think Chris Crosby from Compass Datacenters, and there are some other podcasts out there, talks about how many megawatts or gigawatts of capacity we need based upon what NVIDIA's forecasting for AI builds over the next 3 to 5 years. There's not enough power in the United States electrical grid to support even what they're talking about in the United States, let alone globally.
Jay:47:49
You know, nuclear power is clean. It's real. Everybody's afraid of it because it's nuclear. But somebody can package and deploy a small reactor and put it out where there's direct access to fiber routes and inexpensive access to water for cooling. You can do that.
Jay:48:08
That will be the push. Go nuclear. I guess we're gonna have to go nuclear, or we're not gonna stay ahead of capacity. And it's a supply and demand game. Right?
Jay:48:15
If all the supply is consumed, then the demand's gonna drive where we need to go. And if that means the gigawatts of capacity that everyone's claiming, then we gotta solve for that problem.
Max:48:24
So to regurgitate this, and this is great, I actually like this: forecasts from the chip manufacturer, based on their demand for chips, drive a power consumption curve for data centers in the United States, which in turn drives exhaustion of energy generation capacity, which then dictates a change in generation capacity, which is probably only solved with nuclear, potentially small reactors attached directly to data center campuses in purpose-built builds. Yep.
Max:49:04
Boy, that'd be fun.
Jay:49:05
Oh, look at the big push for Ohio. Why is everybody pushing into Ohio now? It's because there's available power. Right? Why did everybody keep building data centers in Texas?
Jay:49:14
Because Texas has their own electrical grid, and there's available power. I don't see California... I don't see us getting them to build more power plants here. So where's the power? Where's the cheap power?
Max:49:23
Yeah. I agree.
Jay:49:24
Cheaper. There's nothing cheaper than nuclear power.
Max:49:27
No. I agree. I mean, this is why Hillsboro became such a big deal. Cheap power. It was Hydro.
Max:49:31
Cheap hydro power. And I don't think the US is gonna dam up any rivers in the coming decade.
Jay:49:38
So Yeah. The challenge is water. Right? Like, where do we have water? Where do we have available water?
Jay:49:43
Clean water. There was the big push for the data center project in Portugal because they were gonna be leveraging the water from the Atlantic. It's readily available. That's really what it's gonna come down to. It's power and water.
Max:49:54
I'm gonna check in with you on this. This is gonna be fun. I'm gonna put a thing on my calendar, and we're gonna have to do this.
Jay:50:00
And I hope I'll be around. I hope I'm not pushing up daisies at that point. I won't be that old. But
Max:50:09
I agree with the thought process, because there's been a certain constriction of watts per CPU. There was a big explosion of watt consumption per CPU, and then the chip manufacturers got smart and started focusing on performance per watt on their CPUs because, of course, that was impacting their customers' cost, right, and which CPUs were being selected. And this is probably a lot of the battle between Intel and AMD in terms of processor selection. And we can talk about whether or not ARM becomes a big data center platform, or if RISC-V actually comes out as a CPU platform for data centers or not, and what that power envelope of watts per megahertz actually looks like. It's not the right measure, but I mean, 10 years ago, if you told me that we'd be deploying a 50 kilowatt cabinet, I would have laughed at you because it made
Jay:51:02
Again, it's a physics thing. Right? Like, yeah. At the end of the day, if you do hot and cold aisle containment in the data center today with the appropriate size raised floor and the appropriate size return air plenum, you can do 35 kilowatts a rack comfortably. The builds in the Aligned data centers that I did back in 2017 did 25 without pushing the envelope too far.
Jay:51:23
Right now, the ask is a 100. Everyone's asking for a 100 kilowatts a rack. I mean, again, it's physics, man. Physics made my brain hurt, so I didn't really dig into it too far. But smarter people than me understand all of this better.
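To make the "it's a physics thing" point concrete, here is a back-of-the-envelope sketch using the standard sensible-heat relation: the airflow needed to carry heat out of a rack grows linearly with rack power at a fixed supply/return temperature difference. The delta-T and air properties below are illustrative assumptions, not Evocative design figures.

```python
# Rough sketch of why rack density is "a physics thing": the sensible-heat
# equation says the airflow needed to carry heat away scales linearly with
# rack power at a fixed supply/return temperature difference.
# Assumed values (delta-T, air properties) are illustrative only.

AIR_DENSITY = 1.2      # kg/m^3, dry air near sea level (assumed)
AIR_CP = 1005.0        # J/(kg*K), specific heat of air (assumed)
M3S_TO_CFM = 2118.88   # cubic meters/second -> cubic feet/minute

def cfm_required(rack_kw: float, delta_t_k: float = 11.0) -> float:
    """Airflow (CFM) needed to remove rack_kw of heat at the given
    supply/return delta-T (default ~11 K, roughly 20 F)."""
    watts = rack_kw * 1000.0
    m3_per_s = watts / (AIR_DENSITY * AIR_CP * delta_t_k)
    return m3_per_s * M3S_TO_CFM

# Traditional colo rack, contained hot/cold aisle rack, and the current AI ask.
for kw in (8.5, 35.0, 100.0):
    print(f"{kw:6.1f} kW rack -> ~{cfm_required(kw):,.0f} CFM of cold air")
```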
Max:51:35
For those of you following along, we start talking about physics here, and we won't get into this at all, but there become really interesting design considerations for plenums. There are actual applications that model airflow underneath raised tile floors based on the support structure of the floor and where the supply or return ducts are relative to that floor space, and how airflow moves underneath the tiles, hits the support stanchions, and then swirls. I mean, it's... yeah. Okay. I mean, I'm glad I don't have to figure this out.
Max:52:10
You just have to be on the surface.
Jay:52:11
It starts making your brain hurt. That's for the folks who like physics. Right? Like that.
Max:52:15
I know. I'm glad I don't have to figure that part of it out. I just have to say we need a 100 kilowatts per cabinet and go find where we can get it and make it happen. Do you think... I mean, Sun Microsystems a long time ago came out with this crazy idea to put little micro data centers in shipping containers and put them on-site in the parking lots of enterprise users. And it went for a while, and then there were a lot of data center operators doing pod-based construction around these shipping containers where they were able to do rapid assembly. Do you think that approach is dead completely?
Max:52:43
Is it gonna come back? Are we gonna see a resurgence of shipping containers for colocation, with servers inside of containers, and containers inside of containers? It's like shipping containers shipping containers.
Jay:52:52
Well, and then you have the, what is it, the floatable container that we're submerging in the ocean. Right? The submarine data center pods, if you would. I don't know about the container. Like, you take a container and you're boxing in the environment, if you would. Right? And, I mean, there's data centers, even in a warehouse environment, that are running today.
Jay:53:14
So I think it's more the modular part of it. I think you're probably gonna see modular designs, and modular designs have been out for the better part of 15 or 20 years, because, again, you either go to a box or you build to a contained environment. I don't think they're ever gonna entirely go away. There are applications where it makes sense. But, again, it's back to power, water or whatever your cooling medium is, and fiber at that point: what's your connectivity to the container, if you would, or what's readily available from the upstream connectivity to get to carriers.
Jay:53:45
I don't say it's gonna be dead, but the big box is built and designed to... Even the other piece of it, too, is, like, everyone says, well, why not build my own data center? That's
Max:53:56
Don't build your own data center.
Jay:53:57
No. Cost increases. I mean
Max:53:59
Don't do it.
Jay:54:00
There's 2 data points there. Right? Cost to build a tier 3 data center as of last year, December 2022, I believe these are Uptime statistics, had risen by 2,000,000 a megawatt. I think the number we've seen has doubled since then, and that includes, obviously, the cost to get a rental car.
Jay:54:18
Then you have to get power and water, and you have to get permitting, and you gotta build the box. You gotta wait years for generators. I mean
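As a rough illustration of the build-versus-colocate math being gestured at here, this is a minimal sketch. The per-megawatt cost is a hypothetical placeholder, not the Uptime figure from the conversation; plug in whatever current estimate you trust, and note it excludes land, permitting, utility work, and equipment lead times.

```python
# Back-of-the-envelope sketch of greenfield data center capex. The cost per
# megawatt below is a hypothetical placeholder for illustration only.

def build_capex(megawatts: float, cost_per_mw_usd: float) -> float:
    """Rough greenfield capex for a build of the given IT capacity."""
    return megawatts * cost_per_mw_usd

# Hypothetical example: a 5 MW build at an assumed $12M per MW.
capex = build_capex(megawatts=5, cost_per_mw_usd=12_000_000)
print(f"~${capex:,.0f} before land, permitting, utility work, and "
      "multi-year generator and switchgear lead times")
```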
Max:54:26
There's no world where it makes sense to build your own data center. I'm just gonna put my foot down on this one. Just get fiber to a data center close to you and go colocate. Don't build your own data center. Yahoo built what looked like chicken coop data centers in upstate New York, and that was all about heat exhaust.
Max:54:39
How do you exhaust heat without using mechanicals to get the heat out of the facility? Right? So hot air rises, and you can create airflow up and out of the coop and have all this natural convection taking place. I think it was... I'm gonna mix up Facebook, Yahoo, and Google here. They started experimenting with cold aisle... sorry.
Max:54:58
Hot aisle misting, so misting cold water into the hot aisles within their facilities. Now, users like Google and Yahoo or Facebook can create interesting designs because they control it end to end. Right? They're building the hardware. They're putting it in the cabinets.
Max:55:12
I mean, everything end to end. And data centers with mixed use and multiple users have to be a little more generic. Right? This drives a lot of, like, what is your PUE in a facility? Well, if you've got some people bringing in HPE and some people bringing in Dell and some people bringing in Supermicro.
Max:55:29
Right? Like, with your PUE, there's a wall. Do you think there's gonna be a day where data centers become a little bit more prescriptive around what equipment the end user can put in, how it gets deployed, what the build is? Do you think that day comes at some point in the next 10 years?
Max:55:48
Or do you think it's still gonna be
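For context on the PUE "wall" mentioned above: PUE (Power Usage Effectiveness) is total facility power divided by IT equipment power, so every extra kilowatt of cooling overhead shows up directly in the ratio. The sketch below just encodes that definition; the tenant and overhead numbers are hypothetical, not measurements from any facility.

```python
# Minimal sketch of PUE = total facility power / IT equipment power.
# The figures below are hypothetical, only to illustrate how a mixed-vendor
# hall (harder to tune airflow and containment for) carries more cooling
# overhead than a uniform, tightly controlled deployment.

def pue(it_kw: float, cooling_kw: float, other_overhead_kw: float) -> float:
    """Power Usage Effectiveness: 1.0 is the (unreachable) ideal."""
    return (it_kw + cooling_kw + other_overhead_kw) / it_kw

# Hypothetical: same IT load, different cooling overhead.
uniform_hall = pue(it_kw=1000, cooling_kw=250, other_overhead_kw=80)  # ~1.33
mixed_hall = pue(it_kw=1000, cooling_kw=450, other_overhead_kw=80)    # ~1.53

print(f"uniform, tuned hall PUE ~ {uniform_hall:.2f}")
print(f"mixed-vendor hall PUE ~ {mixed_hall:.2f}")
```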
Jay:55:49
It's already here a bit, to some degree. Again, you're talking about a common air exchanger. Right? You're pushing hot air out into the environment and bringing in the colder air, because of the delta-T of having to bring air in from the outside that's at, say, 75 degrees in Southern California, as an example. You only have to bring it down 10 degrees or what have you.
Jay:56:06
Or even then, based on ASHRAE standards, oftentimes you could just bring in the air, and you just have to exhaust the other air out. That's in place today in a lot of places. Then there's the new build environments in what is now an AI world, and I haven't cracked the Skynet joke yet, so I'm not going to. But in the AI world, we have to go to water. And, again, I'm not saying that we're gonna go out and use a cooling tower and boil off thousands of gallons a day.
Jay:56:32
That's not it. We're talking a closed loop solution to just do the heat exchange to the plant that's outside, the chiller plant or what have you. So we gotta build to that. Again, we're going with the modular configuration. We're focusing on the densities that are being asked for today and the densities that we know are coming in that environment. Now, traditional colo, you can probably retrofit some of it to some degree.
Jay:57:00
But, again, are you gonna want an AI client with water to the chip next to the client that's running Dell and HP gear at, call it, an 8 and a half kilowatt stack? And, again, things traditionally have gotten smaller. Right? Back to the days of 20 years ago where I'm building a 4U rackmount box with a motherboard and 3 and a half inch... what were they? What was the size?
Max:57:20
3 and a half inch. Yep.
Jay:57:20
3 and a half inch drives that were Yeah. 9? SCSI
Max:57:25
3? Oh, god. SCSI. 9 gig SCSI.
Jay:57:27
We're really Right?
Max:57:28
Like, 9.2 gigs. Seagate 9.2 gig SCSI. Yeah.
Jay:57:32
Those are
Max:57:32
the hotness. Those were the hotness.
Jay:57:33
Now I'm dating myself. I'm probably dating myself.
Max:57:35
Well, no. If you'd said you put it into a Sun D1000 storage array, then you'd be dating yourself.
Jay:57:40
I still have the SPARC 10, man. We haven't got rid of it yet. It's in storage. I'm gonna turn it into a coffee table at some point, or an end table for the couch.
Jay:57:48
It just needs 4 legs. Plug it in. Oh, it's fantastic. Runs SunOS.
Max:57:52
Yeah. E450s... rise from, rise from...
Jay:57:55
I was thinking about those the other day. I remember...
Max:57:59
Boat anchors?
Jay:58:00
I remembered. I broke my arm putting one in a rack once. They were not...
Max:58:03
If you don't know what an E450 is, the E450 was a very popular Sun server from the late nineties. I think they came out around '97, and it was a multiprocessor chassis, a multi-disk enclosure chassis, and it was a cube. It was 19 inches wide and probably 23 inches tall and 20-some-odd inches deep, and it weighed... it was just solid steel. I mean, this is back in the good old days of Sun building stuff with, like, quarter-inch-thick sheet metal for the enclosures. And
Jay:58:34
Back to... you were quoting a 6509 chassis earlier, right, or a 6500 chassis. It was about the same size as the 6509. It's about 14U.
Max:58:42
Yeah. And they weighed a ton, and we called them boat anchors because you could probably attach a chain to one, drop it in the water, and it'd keep your boat in place. Jay, any last words, any last thoughts that we haven't gotten
Jay:58:53
to? We we
Max:58:54
I got I got sidetracked here.
Jay:58:55
And you're probably gonna have... we may need to have a follow-up to this, because once I kinda cogitate on what we've talked about today, I might have follow-ups. But, no, it's been fun.
Max:59:06
Yeah. Jay, thank you very much. Always a pleasure. Love our chats, and love talking about data centers. They're never... I agree with you.
Max:59:13
Data centers are never going away. I don't care who's forecasting what. The cloud lives in data centers, by the way. If you didn't know that, the cloud is in the data centers. The data centers don't go away.
Jay:59:21
I tried to explain it to my kids. I think that was 10 years ago. Dad loves the cloud. Dad just serves clouds, guys. That's what dad does.
Jay:59:29
He runs data centers for people who run their clouds. It's not actually in the clouds. No. It's not up in the... so
Max:59:34
The cloud is in my data center.
Jay:59:35
Yes.
Max:59:36
There we go. Jay, thank you very much.