Thursday, 9 February 2023

Would we even notice a digital pound?


With the announcement that the Bank of England is consulting on the creation of a Central Bank Digital Currency (CBDC), I thought I would share my thoughts on it.

Read the post here.

Tuesday, 1 November 2022

Building Digital Resilience

Last month I presented at the 7th CENSIS Tech Summit in Glasgow. Here's the video of me speaking:

Thursday, 27 October 2022

Whatever you think is the “metaverse,” this isn’t it.

I was alarmed at the hype being generated around Meta's metaverse announcements and started gathering up evidence to support my position that it is neither new nor the future. 

As part of that process I compared my experience of being very early into the mobile apps space with the current wave of VR/AR/MR/XR claims. I believe I found enough evidence to support my position and that I'm not just being a luddite.

The result was this post on Medium.

Tuesday, 5 July 2022

The Business Debugger: Beer and Observability Escape IT

I was a guest at .conf, Splunk’s annual event for users and analysts. It was held both in person and online. I was one of the several thousand people there in person, about which I will also be writing.

Observability was a big topic, as was Splunk’s journey from log analysis tool to extensible platform.

I find observability quite fascinating. Enterprise IT systems are now so complex, and have grown organically for so many years, that we no longer know what is happening. They work, most of the time, for some definition of work; however, when they break we are often at a loss to know why. Observability is the ability to look into the vast, sprawling patchwork. But this really is just the first step into a new and really exciting business role: observability is escaping from IT into the broader business.

Part of the .conf22 keynote was how Heineken is using Splunk to figure out where lost things go. I’m not sure if that includes socks in the washer, but it definitely included tracking down missing invoices as well as understanding why pallets of beer don’t always end up where they should be. They have used Splunk’s combination of data ingestion, analysis and extensibility to build XOMI (pronounced “show me”), which offers up a comprehensive dashboard of where all sorts of things are across the business, including those missing invoices and the vanishing pallets of beer.

The key point here isn’t really knowing where those lost items happen to be, it’s more about working out which wormholes they managed to fall through to get there. This is the magic, and as a software engineer it suddenly felt very familiar to me. Observability provides the same viewpoint into the state of the overall machine of the business as a debugger does when showing the contents of variables at a breakpoint.

It is axiomatic that once you know what is actually happening it is much easier to fix a problem. Some of the missing invoices were due to software allowing dates to be entered into what should have been purely numeric fields, resulting in the invoice being silently rejected by a more fastidious downstream process. Identifying that missing data entry validation was the root cause would have been an epic task without full observability across the network.
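The invoice scenario above can be sketched in a few lines of Python. This is purely illustrative (none of these function or field names come from Heineken’s or Splunk’s actual systems, and the real failure would span multiple applications): an upstream form accepts free text where a number was expected, and a stricter downstream process silently drops the record, exactly the kind of invisible failure observability exposes.

```python
# Hypothetical sketch of the root cause: no validation at data entry,
# silent rejection downstream. All names here are illustrative.

def upstream_entry(raw_amount: str) -> dict:
    """No input validation: whatever the user typed is passed along."""
    return {"invoice_id": "INV-1001", "amount": raw_amount}

def downstream_process(invoice: dict, rejected_log: list) -> bool:
    """Fastidious consumer: rejects non-numeric amounts without raising."""
    if not invoice["amount"].replace(".", "", 1).isdigit():
        rejected_log.append(invoice)  # the invoice quietly "goes missing"
        return False
    return True

def validated_entry(raw_amount: str) -> dict:
    """The fix: validate at the point of entry instead."""
    float(raw_amount)  # raises ValueError immediately on e.g. a date
    return {"invoice_id": "INV-1001", "amount": raw_amount}

rejected = []
# A date typed into the amount field slips through and vanishes downstream;
# the loss is only observable if something is watching the rejection log.
assert downstream_process(upstream_entry("2022-07-05"), rejected) is False
assert len(rejected) == 1
```

With validation at entry, the bad value fails loudly at its source; without it, only end-to-end observability across both systems reveals where the invoices went.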

Further, given Splunk’s collection of integrations and ease of adding new connections, it is possible to directly open the appropriate piece of software to fix the problem, at the problem location. This is just like a traditional debugger firing up the IDE from a crash, at the line of code that has failed — but at the macro network level. Incredible power.

This is why observability forms what I think we should call the Business Debugger. Stuff goes wrong all over businesses, from exceptions constantly firing silently into logs to disappearing product, and the combination of data ingestion, analysis and extensibility now provides a powerful tool for sorting it all out. The pandemic and recent supply chain issues have finally put business resilience back on the agenda, and this ability to identify and rapidly correct problems provides a hugely important tool for building that resilience.

Tuesday, 26 October 2021

Improving the Signal-to-Noise Ratio


Last week was Splunk’s 2021 event .conf21, held virtually for the second year and generally offering a great experience and a wealth of content for most people who should be interested in Splunk’s products. I say generally as the video stream slowed down at one point while the platform provider worked out what was happening. This put Splunk’s core message into sharp focus: things are complicated and finding the problem is becoming exponentially more difficult.

Splunk started addressing the concept of observability head-on in 2020, proposing their SIEM heritage as an antidote to the complexity of modern IT estates. The reality of modern IT is not all shiny new cloud-native apps. Instead, the typical landscape is littered with legacy systems, multiple cloud providers, spreadsheets, SaaS products, quick hacks gluing things together despite multiple, duplicative integration platforms. Figuring out what broke, not to mention why, in this typical environment is massively difficult, rising exponentially with the number of systems in play. The world needs tools to help find the problems and provide both immediate remediation and long-term insight for prevention.

The industry desperately needs tools that do not just recognise but positively revel in the messy reality of organically grown IT systems. And that means almost anything more than a year or two old. Splunk is making a strong play for this role under the slogan “turn data into doing”. Any running system generates masses of data; the hard part is pulling the important bits from it and acting on them – extracting the signal from the noise, as the Splunk execs said.

Observability is now placed right into the hands of users with the new Splunk RUM (Real User Monitoring) product for mobile apps. Several other new capabilities have been added, many recognising that the Splunk portfolio itself is suffering from the complexity of scale. Templated content packs and a new visual editor to enable automation are just two of them.

The notion of a single surface on which to examine the data extracted by data collector tentacles tapping into all the elements of the infrastructure is very appealing: an observatory, or perhaps a microscope, for all that is both good and bad in the IT estate. The idea of filtering weak signals out of a barrage of noise is also compelling. The question remains whether these benefits will be understood by the broader business audience without an engineering background.


Friday, 5 March 2021

2021 Trends in Low Code

New ways to make software are not new, but it is only now that the true potential is being unlocked. It has taken many attempts to find the right formula, but new low-code tools are now finally delivering on that promise of enabling the creation of great software safely, quickly, and without programming. The first generation of low-code tools was the proverbial faster horse — it worked in much the same way as existing developer tools but with a few shortcuts. The next generation tried to let almost anyone build an app quickly, but without consideration of any potential risks: a classic example of the dangers of power without adequate control. 2021 heralds the arrival of the third generation of low-code tools that balance speed and governance to truly transform business software delivery.

Check out my latest post on Medium to discover why 2021 is the year that low code happens. Learn how low-code tools can modernise your IT estate and deliver the advanced software that your enterprise demands, faster than ever before.

Tuesday, 27 October 2020

Deep Dive into Splunk .conf20

Splunk started as a logfile analysis tool, a category that has now been gentrified into SIEM. That is how it works, but what it does is best captured by one of the company’s famous t-shirt slogans: “looking for trouble”. The latest evolution of the Splunk product offers timely and necessary tools that further that goal, but as with any increasingly complex solution, there are corresponding challenges in reaching the audience.

The Event

Being 2020 this was, of course, a virtual event. I was mainly involved in the private analyst sessions, but the website was easy to navigate and well designed to support a global audience. Content was available in multiple languages and structured around roles and skill levels and there was an impressive roster of outside speakers including actors, a singer, and restaurateurs. I don’t normally pay attention to sessions with sportspeople, however this is Splunk so it was a bit different as the chat was with skateboarding legend Tony Hawk. He had also recorded a special video explaining, and demonstrating, the history of the Ollie. Both were excellent.

Despite the content being virtual, we did not miss out on the usual free food and merchandise that make events popular and special. Ahead of time, we received a kit of cool Splunk-branded material and a treasure trove of munchies to keep us attentive and not wandering off for snacks. This hybrid format is definitely the future of events.

The Product

While the theme of .conf20 was creating a platform for “Data-to-Everything” innovation, for me the key message was that the tool is expanding to meet the needs of a world going cloud. Splunk Cloud was launched back in 2013, but in line with any sensible IT organisation, the focus is now fully on cloud delivery. This is particularly important at a time when most businesses are fighting with workload spread across multiple cloud providers, SaaS vendors and legacy on-premises systems. The complexity of hybrid cloud needs to be matched by a cloud native SIEM approach, and this is precisely what Splunk is offering.

The Splunk Observability Suite is their answer to providing a universal window into the innards of your IT estate, reaching across the spectrum of IT roles, including developers as well as technical and operations support functions. Developers will welcome a more standard programming language, SPL2, and a commitment to open source. There is great support too for a DevOps approach, and they are quick to emphasise that this uses the actual data for feedback, not sampling or prediction, and from some of the client examples that includes a vast amount of data even by modern standards.

Given the ability to handle petascale data, Splunk is also addressing the growing world of machine learning, with the intention of adding data scientists to their target market. Part of this is the introduction of SMLE, Splunk Machine Learning Environment, which I will write about another time.

The Pricing

Splunk is moving its clients from fixed licenses to workload pricing, basically charging for what they use. This is, of course, in line with other SaaS and Cloud vendors, and it makes absolute sense in the new world where workload volumes may change dramatically from unexpected events. That flexibility is invaluable, although it does require a change in thinking from CFOs and the budgeting process. This is clearly a big step forward in business model, and judging by the financial figures shared with us, it has been a big success.

The Problem

All this is great, but Splunk now faces three challenges. The first is getting that “looking for trouble” message over to an ever-broader constituency, many of whom will not understand the mechanisms in the same way as those of us who are used to debugging systems. The messaging will have to be adapted to describe the business benefits more directly, rather than the unquestionable technical capabilities.

The second issue, and thanks to Bola Rotibi for highlighting this, is the need for vertical solutions that address industry-specific needs. Splunk needs to expand its range of implementation partners to achieve this, rather than attempting to develop domain expertise in house.

The final challenge is that of converting insights into action. Observability is a great start, but even better would be the ability to recommend fixes, or indeed to activate them in well-defined cases. Automating both finding the trouble and applying the solution is an obvious objective, but until that can be done reliably, non-technical people will struggle to understand the value proposition as currently expressed.