Three big problems with big data projects


By Andrew Collins
Thursday, 21 November, 2013


Big data makes big promises. It’s often presented as a way for you to learn everything you might want or need to know about your customers, your product, your company, your workers, your market and almost anything else. Throw a bunch of data together, run some analyses and you’ve basically got Douglas Adams’ Deep Thought at your fingertips.

Marketing aside, there have been some interesting big data case studies recently that show off genuinely impressive results.

But while a perfectly executed big data project can indeed reveal useful or surprising insights, there are some large stumbling blocks that can hamper progress along the way.

Data variety

Big data is often described in terms of ‘3 Vs’: data volume (size of data stores), data velocity (the speed with which the data moves through a system) and data variety (different types of data, like text, images, audio, video and so on).

Ian Bertram, Asia Pacific head of research at Gartner, said that with big data, “The biggest challenge actually is variety. When we survey our organisations, over 50% say their biggest challenge is the variety of data.”

You may have stuff in your data warehouse, Bertram said, “but then there’s audio, video, stuff in other systems, email, operational data. There’s all of this stuff and they’re all in different formats. How do I make sense of all of that, and how do I bring it together in a meaningful way? How do I mash it up?

“How do I serve it up to users in a way that they can actually trust the data, that they know that they’re using it contextually correct?” Bertram said.
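
One common, simplified approach to that mash-up problem is to map records from each source system into a single shared schema before any analysis happens. The short Python sketch below is only an illustration of that idea, not a description of any particular product; the source systems, field names and sample records are all hypothetical.

# A minimal sketch of one approach to the variety problem: normalising
# records from different systems into a single shared schema before analysis.
# The source formats, field names and sample values below are hypothetical.

from datetime import datetime

def from_crm_row(row):
    """Normalise a row from a hypothetical CRM export (a dict of strings)."""
    return {
        "source": "crm",
        "customer": row["customer_name"],
        "timestamp": datetime.strptime(row["date"], "%d/%m/%Y"),
        "text": row["notes"],
    }

def from_email(msg):
    """Normalise a hypothetical email record (sender/sent/subject/body)."""
    return {
        "source": "email",
        "customer": msg["sender"],
        "timestamp": msg["sent"],
        "text": msg["subject"] + "\n" + msg["body"],
    }

# Heterogeneous inputs from two different systems
crm_rows = [{"customer_name": "Acme Pty Ltd", "date": "21/11/2013", "notes": "Renewal due"}]
emails = [{"sender": "Acme Pty Ltd", "sent": datetime(2013, 11, 20),
           "subject": "Pricing query", "body": "Can you confirm next year's pricing?"}]

# Once everything shares one schema, it can be queried or mashed up together
unified = [from_crm_row(r) for r in crm_rows] + [from_email(m) for m in emails]
for record in sorted(unified, key=lambda r: r["timestamp"]):
    print(record["timestamp"].date(), record["source"], record["customer"])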

The network

Data velocity also proves to be a big challenge for some big data projects.

“[Organisations] might have the processing power to chunk large volumes of data, but do they have the network infrastructure to pass it really quickly, to move it around and action it really quickly?” Bertram said.

Network bandwidth will continue to be a challenge for big data projects, and that’s where new technologies like MapReduce will come into play, Bertram said.

MapReduce “allows you to aggregate information in a digestible way, so you’re not pumping everything down a pipe; you’re pumping an aggregated version of things down a pipe. That’s where things will start to change,” he said.
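
To make that point concrete, here is a minimal Python sketch of the aggregation pattern Bertram describes: each node reduces its own records to a small summary first, so only those summaries - not the raw data - are sent across the network to be merged. The records, keys and node contents are hypothetical examples.

# A minimal sketch of local aggregation in the MapReduce style: each node
# reduces its own records to a small summary, and only those summaries
# travel across the network to be merged into a final result.

from collections import Counter

def map_phase(records):
    """Map step: turn each raw record into a (key, count) pair."""
    for record in records:
        yield (record["product"], 1)

def local_reduce(pairs):
    """Combine step on each node: aggregate before anything leaves the node."""
    totals = Counter()
    for key, count in pairs:
        totals[key] += count
    return totals  # a small summary, cheap to ship over the network

def global_reduce(partial_totals):
    """Reduce step: merge the per-node summaries into the final answer."""
    merged = Counter()
    for partial in partial_totals:
        merged.update(partial)
    return merged

# Hypothetical raw records held on two separate nodes
node_a = [{"product": "widget"}, {"product": "gadget"}, {"product": "widget"}]
node_b = [{"product": "gadget"}, {"product": "widget"}]

# Only the two small summaries cross the network, not the five raw records
summaries = [local_reduce(map_phase(node_a)), local_reduce(map_phase(node_b))]
print(global_reduce(summaries))  # Counter({'widget': 3, 'gadget': 2})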

Skills and processes

One of the biggest challenges in a big data project is actually taking action on an insight once your analyses have revealed something - for example, that you need to change your pricing, your product or the way you go to market.

“The value of all of this stuff is in an organisation’s ability to action this. It’s great to invest in all of this technology, but are you investing in the skills and processes for people to actually do something about it?” Bertram said.

“We can talk about the 3 Vs and that makes perfect sense for a lot of organisations. But then you get down to the nuts and bolts and around skills and capabilities within an organisation - what are you doing around investing in that? And that’s where organisations fall down.”

This difficulty organisations have in capitalising on the insights they’ve gleaned from big data comes up “constantly”, Bertram said.

Pictured: Gartner’s Ian Bertram.
