The three building blocks for resilient organisations: part 2

Data-driven business: why analytics needs to be at the heart of your business.

Moving forward, organisations will need to consider new sources of growth: which sectors or segments will drive demand, where their supply chains are exposed, and how best to serve a more digitally engaged customer base.

All these aspects require data and the know-how to use it effectively. Yesterday’s data architecture cannot meet today’s need for speed, flexibility and innovation.

Organisations will need to use technology and data in new ways and accelerate the scope and scale of innovation. Decision-makers no longer need to rely on gut instinct to support their plans; they can draw on extensive and precise information. New external sources of data, pulled into systems and supported by machine learning and AI, are at the heart of this transformation.

Organisations should systematise processes that empower the front lines to make connections with customers. Investments in digital and advanced analytics that help uncover and track new customer and market insights will help remove some of the risk inherent in shifting decisions down the organisation. Leveraging analytics platforms that can develop multiple scenarios, or incorporate the latest customer insights into suggestions or actions, can further reduce time to market.

But with most organisations in resiliency mode, the question is how to ramp up data efforts while managing costs.

  1. Consider a shift from on-premises to cloud-based platforms: cloud is probably the most disruptive driver of a radically new data-architecture approach, as it offers companies a way to rapidly scale AI tools and capabilities for competitive advantage.
  2. Switch to real-time data processing: these technologies enable rule-based or advanced analytics to extract events or signals from the data, or even integrate historical data to compare patterns, which is especially valuable in recommendation or prediction engines.
  3. Expose data via APIs: this can ensure that direct access to view and modify data is limited and secure, while simultaneously offering faster, up-to-date access to common data sets. It allows data to be easily reused across teams, accelerating access and enabling seamless collaboration within analytics teams so AI use cases can be developed more efficiently.
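The rule-based signal extraction in step 2 can be sketched in a few lines. This is a minimal, illustrative example, not any particular vendor's engine: the `Reading` record, the threshold rule and the field names are all assumptions made for the sketch.

```python
from dataclasses import dataclass
from typing import Iterable, Iterator

# Hypothetical record type: one timestamped metric value from a data stream.
@dataclass
class Reading:
    ts: int
    value: float

def extract_events(stream: Iterable[Reading], threshold: float) -> Iterator[dict]:
    """Rule-based signal extraction: emit an event whenever a reading
    crosses the threshold upward, compared with the previous reading."""
    prev = None
    for r in stream:
        if prev is not None and prev.value <= threshold < r.value:
            yield {"ts": r.ts, "type": "threshold_crossed", "value": r.value}
        prev = r

# Replaying a short slice of "historical" data through the same rule,
# as step 2 suggests, lets you compare patterns across time windows.
readings = [Reading(1, 10.0), Reading(2, 12.0), Reading(3, 25.0), Reading(4, 18.0)]
events = list(extract_events(readings, threshold=20.0))
```

Because the extractor is a generator over any iterable, the same rule runs unchanged over a live stream or a historical replay, which is the comparison step 2 calls out.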

By placing bets on multiple ventures, organisations are able to quickly learn what can work at scale.

Creating value with external data

Few organisations take full advantage of data generated outside their domain. A well-structured plan for using external data can provide a real competitive edge. Weather data, search trends, consumer panels, patent filings and online reviews are just a few examples. Overlooking such external data is a missed opportunity.

Whilst many have made great strides in collecting and utilising data from their own activities, only a few have leveraged the full potential of linking internal data with data provided by third parties, vendors, or public data sources.

Organisations that keep up to date with the expanding external-data ecosystem and successfully integrate a broad spectrum of external data into their operations can outperform other companies by unlocking improvements in growth, productivity and risk management.

Company leaders, along with data and analytics officers and their teams, should learn how to rigorously evaluate and test external data before using and operationalising it at scale.

Three steps to getting started with external data

Use of external data has the potential to be game changing across a variety of business functions and sectors. The journey toward successfully using external data has three key steps:

1. Establish a dedicated team for external-data sourcing

To get started, organisations should establish a dedicated data-sourcing team. Throughout the process of finding and using external data, companies must keep in mind privacy concerns and consumer scrutiny, making data-review roles an essential part of the wider team. The vetting process should ensure that all data is collected with appropriate permissions and will be used in a way that abides by GDPR and other privacy laws.

2. Develop relationships with data marketplaces and aggregators

Use data-marketplace and aggregation platforms that specialise in building relationships with hundreds of data sources, often in specific data domains. Since these external-data distributors have already profiled many data sources, they can be valuable thought partners and can often save an external-data team significant time.

3. Prepare the data architecture for new external-data streams

Generating a positive return on investment from external data calls for up-front planning, a flexible data architecture and ongoing quality-assurance testing. The final part of this step is ensuring an appropriate and consistent level of quality by constantly monitoring the data used: examining it regularly against the established quality framework to identify whether the source data has changed.

For further information, including nine factors to consider when upgrading legacy and shadow IT systems, download our full whitepaper.

In our next blog of this series, we’ll be exploring the risks of using spreadsheets for scenario planning and forecasting, so keep an eye on our social channels where we’ll be publishing details. Don’t miss our previous blog on hybrid working.

