FinOps X Review and Takeaways
This is going to be a bit of a stream-of-consciousness blog post as I reflect on what we encountered, what we learned, and how we are going to react to our FinOps X trip last week in San Diego. This won’t be a comprehensive account, and I may be inspired to say more as I mull it over in the coming weeks, but I’m motivated to get a couple of thoughts off my mind.
I’ll start by acknowledging I went into this trip with some reservations. Some of it was the usual anxiety about trade shows. Would the time be worth the investment? Would I capture any signal from the noise? Would my will for intense mingling hold up over three days? Or much worse, would there even be a forum to actually talk to people or, as Adele would say, would I just keep chasing pavements? Some of my reservations were about how a couple thousand FinOps professionals would respond to our message. I’ve gotten plenty of feedback from prospects and customers alike that Next Signal is addressing a problem that has flown under the radar. But this gathering of ideal customers in the FinOps space was a different kind of opportunity altogether. Hey, we all have insecurities to overcome, right?
What worked (and what didn’t)
No need to go through those anxieties point by point and bury the lede. Three Next Signal team members attended FinOps X, we spent a bit under $10K, and it was completely worth it. It was one of the best events I’ve attended in over 15 years of industry events and trade shows. If I had spent (A LOT) more to have a booth there, I might not have felt as great about it, since this was our first time attending, but more on that later. The two highlights for me were the chalk talks and (don’t laugh) the onsite social events.
The “Chalk Talks” I attended each began with a cookie-cutter prompt, a big theme to get everyone thinking, but the format allowed us to break down the topic and get into some meaty discussions. I’ve already told my team that when I return to FinOps X, I am going to plan my schedule around the chalk talks. They were the right size, addressed relevant topics, and sparked some thought-provoking conversations. Segmenting them by role meant you were in sessions with people facing the same problems, so there was plenty of common ground to compare. Notably, it also meant there was some sameness to the solutions being employed, which, in my opinion, needed to be challenged. More on that later, later.
The other thing that worked quite well was keeping people on site together for meals and breaks. This event-enforced requirement wouldn’t have worked as nicely if they hadn’t continuously plied attendees with food and alcohol, but with those bases covered, it did a great job of creating random one-on-one interactions. I had imagined a world where I’d be trying to string together 5- to 15-minute interactions and hoping to schedule more meaningful conversations after the event. Instead, we ended up having a string of 45-minute conversations with real depth.
What didn’t work? I heard a lot of grumbling about the suboptimal cost-to-value ratio of operating a booth here. Vendors weren’t drawing the crowds they hoped for. Because the event did such a great job of promoting individual conversations and small-group learning, people didn’t have much time for wanton marketing. In my opinion, it just isn’t a “wander the floor” type of event, and I hope that continues to be the case.
Also didn’t work: indoctrination. There was a lot of “reinforcing the message” going on from event organizers, which can lead to some stagnant thinking. Additionally, it seems like, overnight, many dozens of FinOps companies have appeared, all trying to carve the savings and management pies into smaller and smaller pieces. They are all doing so with the same tool sets and then wondering why so few people are stopping by their booths.
Real-time Evolution
FinOps as a practice is relatively young, and it is interesting to watch it evolve in real time. Many large companies rely on single-person practices (or very small teams) to coordinate huge amounts of spend, and it’s easy to imagine advances in AI keeping those teams small. There was lots of talk before and after about the impact AI is having on cloud spend, where resources are going, how AI is being used to manage cloud infrastructure, and how it will be used to manage budgets. The conversations are currently centered on making a better widget. I see this as a push to perfect existing tools and a broad understanding of the massive dollars at stake. I also see opportunities to broaden the scope of how the FinOps practice can contribute.
I attended two Chalk Talk sessions targeted at leaders. One of them was about optimization and the other was about anomaly detection. I’m curious if I would have walked away from an engineering chalk talk feeling differently, but in the leadership group, something jumped out at me: there still isn’t enough communication between the technical and finance teams.
- The optimization discussion was initially about optimizing infrastructure to get the most bang for your buck. I understand that when the conversation is about managing infrastructure spend, it will center on infrastructure. But many companies still run applications that are not cloud optimized or cloud native, and optimizing infrastructure allocation and spend will only get you so far. At some point, they’ll find better returns by looking at the application architecture.
- The discussion about anomaly detection started down the path of using billing to catch infrastructure allocation problems. When you are on the finance side of the problem, it makes sense to look to billing for solutions. But anomaly detection succeeds by finding problems early, and the absolute best case for detecting an anomaly through billing is maybe a day; usually it will be quite a bit more. Unfortunately, you are likely to spend a ton of money before the bill tells you there’s a problem. Your cloud engineering teams already have anomaly detection tools. The conversation should be about how we get those tools talking to finance faster, not about how finance can maximize the value of inherently reactive ones.
The FinOps Tool You Aren’t Using (but should)
Speaking of anomaly detection, there is at least one important reactive tool you probably aren’t using: your SLA. I think vendor downtime should be treated as more of an anomaly than it is, and based on the feedback I was getting, FinOps leaders, Cloud Engineering, Customer teams, and Corporate Management teams agree with me. The market is doing a great job churning out software that manages compute and reserved instances and keeps track of what you have deployed, but it does little to help you hold vendors accountable when they fail. The returns on a better widget keep diminishing while our cloud spend keeps growing. By using a solution like Next Signal to recover credits for downtime, FinOps teams have an opportunity to get more creative about delivering value back to the company. Customers are bearing the burden of margin erosion, not their cloud providers, and yet we keep paying them for downtime. I’m super excited to be introducing customers to Next Signal as a way to take back some leverage.
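To make the idea concrete, here is a minimal sketch of how an SLA credit might be estimated from measured downtime. Everything here is a made-up illustration: the uptime tiers, credit percentages, and function names are invented for this example, and real credit schedules vary by provider and by contract.

```python
# Hypothetical SLA credit estimate. The tiers below are invented for
# illustration only; real providers publish their own schedules.
SLA_CREDIT_TIERS = [
    (99.99, 0),    # SLA met: no credit owed
    (99.0, 10),    # below 99.99% but at least 99.0%: 10% credit
    (95.0, 25),    # below 99.0% but at least 95.0%: 25% credit
    (0.0, 100),    # below 95.0%: full credit
]

def monthly_uptime_pct(downtime_minutes: float,
                       minutes_in_month: float = 30 * 24 * 60) -> float:
    """Percentage of the month the service was actually available."""
    return 100.0 * (1 - downtime_minutes / minutes_in_month)

def sla_credit_pct(uptime_pct: float) -> int:
    """Credit owed, as a percentage of the monthly bill."""
    for threshold, credit in SLA_CREDIT_TIERS:
        if uptime_pct >= threshold:
            return credit
    return 100

# A 4-hour outage (240 minutes) in a 30-day month:
uptime = monthly_uptime_pct(240)   # ~99.44% uptime
credit = sla_credit_pct(uptime)    # 10% credit under these example tiers
```

The point of the sketch is the gap it exposes: the credit is only recovered if someone measures the downtime and files the claim, which is exactly the accountability work most FinOps tooling skips.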