Tuesday, 26 October 2021

Improving the Signal-to-Noise Ratio


Last week was Splunk's 2021 event, .conf21, held virtually for the second year and generally offering a great experience and a wealth of content for anyone interested in Splunk's products. I say generally because the video stream slowed down at one point while the platform provider worked out what was happening. The incident put Splunk's core message into sharp focus: things are complicated, and finding the problem is becoming exponentially more difficult.

Splunk started addressing the concept of observability head-on in 2020, proposing its SIEM heritage as an antidote to the complexity of modern IT estates. The reality of modern IT is not all shiny new cloud-native apps. Instead, the typical landscape is littered with legacy systems, multiple cloud providers, spreadsheets, SaaS products, and quick hacks gluing things together despite multiple, duplicative integration platforms. Figuring out what broke in this typical environment, not to mention why, is massively difficult, and the difficulty rises exponentially with the number of systems in play. The world needs tools that help find the problems and provide both immediate remediation and long-term insight for prevention.

The industry desperately needs tools that do not just recognise but positively revel in the messy reality of organically grown IT systems, which is to say anything more than a year or two old. Splunk is making a strong play for this role under the slogan "turn data into doing". Any running system generates masses of data; the hard part is pulling the important bits from it and acting on them, extracting the signal from the noise, as the Splunk execs put it.

Observability is now placed quite literally in the hands of users with the new Splunk RUM (Real User Monitoring) product for mobile apps. Several other new capabilities have been added, many recognising that the Splunk portfolio itself is suffering from the complexity of scale. Templated content packs and a new visual editor to enable automation are just two of them.

The notion of a single surface on which to examine the data extracted by collector tentacles tapping into all the elements of the infrastructure is very appealing: an observatory, or perhaps a microscope, for all that is both good and bad in the IT estate. The idea of filtering weak signals out of a barrage of noise is equally compelling. The question remains whether these benefits will be understood by the broader business audience without an engineering background.


Friday, 5 March 2021

2021 Trends in Low Code

New ways to make software are not new, but it is only now that the true potential is being unlocked. It has taken many attempts to find the right formula, but new low-code tools are now finally delivering on the promise of enabling the creation of great software safely, quickly, and without programming. The first generation of low-code tools was the proverbial faster horse: it worked in much the same way as existing developer tools but with a few shortcuts. The next generation tried to let almost anyone build an app quickly, but without consideration of the potential risks: a classic example of the dangers of power without adequate control. 2021 heralds the arrival of the third generation of low-code tools, which balance speed and governance to truly transform business software delivery.

Check out my latest post on Medium to discover why 2021 is the year that low code happens. Learn how low-code tools can modernise your IT estate and deliver the advanced software that your enterprise demands, faster than ever before.

Tuesday, 27 October 2020

Deep Dive into Splunk .conf20

Splunk started as a logfile analysis tool, a category that has since been gentrified into SIEM. That is how it works, but what it does is best captured by one of the company's famous t-shirt slogans: looking for trouble. The latest evolution of the Splunk product offers timely and necessary tools that further that goal, but as with any increasingly complex solution, there are corresponding challenges in reaching the audience.

The Event

Being 2020 this was, of course, a virtual event. I was mainly involved in the private analyst sessions, but the website was easy to navigate and well designed to support a global audience. Content was available in multiple languages and structured around roles and skill levels, and there was an impressive roster of outside speakers including actors, a singer, and restaurateurs. I don't normally pay attention to sessions with sportspeople, but this being Splunk it was a bit different: the chat was with skateboarding legend Tony Hawk. He had also recorded a special video explaining, and demonstrating, the history of the Ollie. Both were excellent.

Despite the content being virtual, we did not miss out on the usual free food and merchandise that make events popular and special. Ahead of time, we received a kit of cool Splunk-branded material and a treasure trove of munchies to keep us attentive and not wandering off for snacks. This hybrid format is definitely the future of events.

The Product

While the theme of .conf20 was creating a platform for "Data-to-Everything" innovation, for me the key message was that the tool is expanding to meet the needs of a world going cloud. Splunk Cloud was launched back in 2013, but in line with any sensible IT organisation, the focus is now fully on cloud delivery. This is particularly important at a time when most businesses are fighting with workloads spread across multiple cloud providers, SaaS vendors, and legacy on-premises systems. The complexity of hybrid cloud needs to be matched by a cloud-native SIEM approach, and this is precisely what Splunk is offering.

The Splunk Observability Suite is their answer to providing a universal window into the innards of your IT estate, reaching across the spectrum of IT roles, including developers as well as technical and operations support functions. Developers will welcome a more standard programming language, SPL2, and a commitment to open source. There is great support too for a DevOps approach, and Splunk is quick to emphasise that feedback is based on the actual data, not sampling or prediction; judging from some of the client examples, that can mean a vast amount of data even by modern standards.

Given the ability to handle petascale data, Splunk is also addressing the growing world of machine learning, with the intention of adding data scientists to their target market. Part of this is the introduction of SMLE, Splunk Machine Learning Environment, which I will write about another time.

The Pricing

Splunk is moving its clients from fixed licenses to workload pricing, basically charging for what they use. This is, of course, in line with other SaaS and cloud vendors, and it makes absolute sense in a world where workload volumes may change dramatically with unexpected events. That flexibility is invaluable, although it does require a change in thinking from CFOs and the budgeting process. This is clearly a big step forward in business model, and judging by the financial figures shared with us, it has been a big success.

The Problem

All this is great, but Splunk now faces three challenges. The first is getting that looking for trouble message over to an ever-broader constituency, many of whom will not understand the mechanisms in the same way as those of us who are used to debugging systems. The messaging will have to be adapted to describe the business benefits more directly, rather than the unquestionable technical capabilities.

The second issue, and thanks to Bola Rotibi for highlighting this, is the need for vertical solutions that address industry-specific needs. Splunk needs to expand its range of implementation partners to achieve this, rather than attempting to develop domain expertise in house.

The final challenge is that of converting insights into action. Observability is a great start, but even better would be the ability to recommend fixes, or indeed to apply them in well-defined cases. Automating the finding of both trouble and solution is an obvious objective, but until that can be done reliably, non-technical people will struggle to understand the value proposition as currently expressed.

Friday, 16 October 2020

Reporting from Slack Frontiers

This was my first time attending Slack's annual event, and of course the 2020 edition was the first virtual version. It was the best virtual event I have attended this year by a long way, both a superb experience and a great source of insight. Let us look at each in turn.

The Event

I have been saying all year that virtual events lack buzz and are generally very dull, no matter how good the speakers and how interesting the sessions. Having been involved in theatre and dance productions for years, I know that it is often the small things that make a difference, and while traditional theatres are all about opulent-looking buildings, even a scruffy temporary venue can be given a really exciting atmosphere. This does not translate to skeuomorphic online virtual buildings. It translates to more visceral things such as music, countdowns, easy-to-use discussion areas, and easy ways to identify who the guests are, as well as more private, more privileged areas for people such as the analysts or major customers.

Slack Frontiers was the first event to get this stuff right. It also had a number of other sensible touches, such as running for two mornings rather than a single, exhausting day, and using that format to run in three time zones for global coverage. As well as video sessions there were live round tables where delegates could join and debate, and these ran faultlessly, as did the live humans on Slack, although by the end of the two days of global coverage they must have been worn out but well satisfied with their efforts.

Another interesting advantage of using recorded videos was that you could turn on captions and watch two sessions simultaneously, jumping into different points as and when they became directly relevant. There was gamification too, logging points for joining sessions, downloading materials, and the like. There was even a high-score table, with people competing for various levels of swag. I qualified for a pair of brightly colored Slack socks.

Slack is where the work happens

When you work from anywhere, as I have done for a long time, what does workplace mean? Is it my home office, coffee shop, or before the pandemic, airports, and hotel lobbies? Or should we think about it in more technical terms? I am increasingly of the view that we should be looking at what we might call worktops, borrowing a British term for the kitchen counters on which you prepare food.

Laptops and tablets have displaced desktop computers for mobile workers, so using desktop as an analogy is as anachronistic as “dialling” a telephone. Normally the software tools we use sit on the operating system desktop, and the users must go to them, switching context as they do. What we are now seeing with Slack, Teams and other collaboration tools is an inversion. The workflows and activities move inside the collaboration tool.

In some ways this is similar to the notion of containers that we had for a while in the world of enterprise mobile apps, bringing advantages for security and ease of use. But those containers did not form a valid destination on their own, while Slack is a key work destination. Slack presented some impressive stats on how it accelerates work across the whole business: 13% faster sales cycles, 16% faster marketing campaign execution, and 24% faster work in HR and in engineering, where it all started.

These make Slack a comfortable and popular destination in the first place, with users engaging with it automatically while they are working. Slack now provides a range of tools that allow other enterprise applications to be brought into the Slack environment so there is no loss of context; in fact quite the reverse, because many work items are triggered by communication anyway.

And triggers are a fundamental aspect of how Slack is providing an event-driven worktop that has the potential to transform and enhance how many people do their jobs. From a programming point of view there are several ways of firing these triggers. At the simplest level you can use the built-in Workflow Builder to automate tasks without any technical understanding whatsoever. As an experiment for a client project I made a few simple workflows in an afternoon. One sent a message to users who had just joined a channel, offering a link to video content describing the team culture. Another, more subtle one, is based on reactions: people who respond to messages with a weeping emoji are asked if they are having mental health issues and, if so, offered a choice of a chat, some learning material, or coaching.
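For flavour, the reaction-based workflow above can be sketched as ordinary code. This is a minimal simulation of the dispatch logic only, not a real Slack app: the event shape loosely follows Slack's reaction_added event, but the handler, the emoji name, and the support options are my own illustrative assumptions.

```python
# Minimal simulation of a reaction-triggered workflow. The event dict
# is modelled on Slack's reaction_added event payload; in production
# this logic would live in Workflow Builder or a Slack app, not here.

SUPPORT_OPTIONS = ["a chat", "learning material", "coaching"]

def handle_reaction(event):
    """Return the follow-up action for a reaction event, or None."""
    if event.get("type") != "reaction_added":
        return None
    if event.get("reaction") == "cry":  # the weeping emoji
        return {
            "user": event["user"],
            "message": "Are you having a difficult time? We can offer:",
            "options": SUPPORT_OPTIONS,
        }
    return None  # all other reactions trigger nothing

# A reaction that should fire the wellbeing workflow
event = {"type": "reaction_added", "reaction": "cry", "user": "U123"}
action = handle_reaction(event)
```

The point is not the few lines of Python but that the trigger-to-action mapping is simple enough for Workflow Builder to express with no code at all.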

While these are, obviously, quite trivial from a programming point of view, the mechanism is hugely powerful. However, this is nothing compared to the webhook capabilities of Slack apps and the socket-based real-time messaging API. The former allows developers to activate workflows within Slack channels directly. The latter provides what we might call serious integration possibilities, bringing enterprise apps right into the colorful and collaborative Slack experience.
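As an illustration of how lightweight the webhook side is: a Slack incoming webhook is just an HTTP POST of a small JSON body to a per-app URL. The sketch below builds that body and shows the POST call; the URL is a placeholder and the request itself is deliberately not executed.

```python
import json
from urllib import request

# Placeholder URL -- a real one is issued when you add an incoming
# webhook to a Slack app; it must never be committed to source control.
WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXXXXXX"

def build_payload(text):
    """Build the minimal JSON body that an incoming webhook accepts."""
    return json.dumps({"text": text}).encode("utf-8")

def post_to_slack(text, url=WEBHOOK_URL):
    """POST the message to the webhook (not called here: placeholder URL)."""
    req = request.Request(
        url,
        data=build_payload(text),
        headers={"Content-Type": "application/json"},
    )
    return request.urlopen(req)

body = build_payload("Build finished: all tests passed")
```

Anything that can make an HTTP request, from a CI server to a spreadsheet macro, can therefore drop a message or trigger into a channel, which is exactly what makes the worktop idea practical.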

And of course all this is backed up by analytics that allow insight into team behavior and performance, as well as optimizing workflows. This is designed to ensure that when organizations do bring their enterprise apps together into a single, unified interface, IT has the tools it has been painfully missing: understanding where the sharp edges are, the tools nobody uses, and the hot spots where investment needs to be directed for best return.

All this drives Slack’s aspiration to become a “business operating system.” This great objective sets some very high standards for real-time operating, integration, case management and security all combined with employee experience. Slack also has the advantage of being independent of other platforms making it ideal for companies that have a legacy of M&A or departmental purchases that have left a patchwork of productivity suites. Whatever your circumstances, the idea of bringing the work, tools and content to the user is a huge advantage over making the user hunt about for them. This approach should be informing all CIO strategies.