Managing huge data sets used to be a problem that only the largest of enterprises had to deal with. Now everyone – from the college student who’s developing the next Yelp, to IBM – is dealing with the three V’s that define Big Data: volume, velocity and variety, and the Big Data security issues that come with them.
Matt Zisow, director of product at online retailer Wayfair, said in a recent interview with ZDNet that, “Big data isn’t just a buzzword or a passing fad; it is a fundamental, increasingly ‘table stakes’ capability for companies in all sectors. If you’re not actively investing in your company’s ability to gather and harness your data in ways that matter to your customers, you’re almost certainly falling behind your competition, even if you don’t know it.”
Investment in Big Data is certainly on the rise. According to IDC, worldwide revenues for Big Data and business analytics will grow from $130.1 billion in 2016 to more than $203 billion in 2020.
The intelligence that can be retrieved from Big Data is changing the capabilities of businesses across multiple verticals. Not only does Big Data allow companies everywhere to become more efficient, it offers access to incredible amounts of analytics to help companies draw insights for their operations and delivers the power to engineer outcomes for success.
However, the latest headlines about the record-breaking Equifax data breach should remind us all how sensitive Big Data is. If you’re dealing with multiple, large data sets – and you most likely are – you should educate yourself about the main security concerns that Big Data raises, and how to address them.
#1 Data Sources
The velocity and volume of Big Data can also be its major security challenge. Each data source will usually have its own access points, its own restrictions, and its own security policies.
It is also often the case that each source will speak a different data language, making it more difficult to manage security while aggregating information from so many places. One company alone could house multiple Big Data clusters containing in-house development info, Personally Identifiable Information (PII), and major infrastructure data assets.
Should these elements be aggregated and combined into one platform? Or should each entry-point require permission to ensure security on a case-by-case basis?
There are many arguments for and against, but if one system is using 30 different Big Data sources, that’s 30 different breach opportunities for opportunistic cyber criminals. It’s one of the first concerns that an organization needs to be aware of when mapping out a data security policy.
#2 Data Infrastructure
When Big Data is stored in multiple locations, it is much more vulnerable than if it were housed in a single high-end, secured server.
Securing data across different servers also becomes more of a challenge, since security policies must be standardized across all locations.
More servers mean more points of access, which can mean greater chances of compromise when they are not closely monitored and given the proper security consideration.
In earlier days, some open source players invested a lot in creating platforms and tools that provided better features and flexibility, while putting less of an effort into encrypting data or implementing log-in verification and validation processes.
Some experts and entrepreneurs see this oversight as an opportunity to step in and provide solutions, and the market is certainly open to those types of ventures.
These are the main sources of Big Data anxiety for most CISOs. What can the industry do to relieve some of this stress? It is clear that companies need to find ways to lock down their data assets that don’t require reinventing the wheel or begging for massive budgets. We’ve put together a quick list of policies and procedures that organizations should implement to keep their data safe.
Account Monitoring and Control

It might seem obvious, but you’d be surprised at the number of organizations that overlook the importance of account monitoring and control.
As a first line of defense against malicious log-ins, it is essential to require stronger passwords, impose a maximum number of log-in attempts, and regularly check for and deactivate inactive users.
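The lockout and inactive-user checks above can be sketched in a few lines. This is a minimal illustration, not a production implementation: the `Account` class, the five-attempt limit, and the 90-day inactivity window are all hypothetical choices, and a real deployment would hook into your actual identity store.

```python
from datetime import datetime, timedelta

MAX_ATTEMPTS = 5                        # lock the account after this many failed log-ins
INACTIVITY_LIMIT = timedelta(days=90)   # flag accounts unused for this long

class Account:
    """Hypothetical user record tracking log-in activity."""

    def __init__(self, username):
        self.username = username
        self.failed_attempts = 0
        self.locked = False
        self.last_login = datetime.utcnow()

    def record_failed_login(self):
        # Count the failed attempt and lock the account once the limit is hit.
        self.failed_attempts += 1
        if self.failed_attempts >= MAX_ATTEMPTS:
            self.locked = True

    def record_successful_login(self):
        # A successful log-in resets the counter and refreshes the activity stamp.
        self.failed_attempts = 0
        self.last_login = datetime.utcnow()

def inactive_accounts(accounts, now=None):
    """Return accounts whose last log-in is older than the inactivity limit."""
    now = now or datetime.utcnow()
    return [a for a in accounts if now - a.last_login > INACTIVITY_LIMIT]
```

Accounts returned by `inactive_accounts` are candidates for deactivation during a periodic review, rather than being deleted outright, so that an audit trail is preserved.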
Open Source Security Management
It’s been years since the Open Web Application Security Project (OWASP) updated its famous top 10 list to include “components with known vulnerabilities.” Sadly, some organizations are still slow to catch up to this basic concept.
If you’re using an open source component, make sure it’s secure. In case you haven’t heard this from us before: continuous management of the open source components in your Big Data operations is essential.
Make sure you have a system in place that automatically scans your components, locates the vulnerable ones, and helps you remediate them immediately.
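At its core, such a scan compares your component inventory against published advisories. The sketch below shows the idea with hard-coded example data; the component names, versions, and advisory list are all hypothetical, and a real system would pull advisories from a vulnerability database and build the inventory from your actual dependency manifests.

```python
# Component -> version currently in use (hypothetical example data).
inventory = {
    "struts": "2.3.5",
    "log-lib": "1.2.0",
}

# Component -> set of versions with published vulnerabilities (hypothetical).
advisories = {
    "struts": {"2.3.5", "2.5.10"},
}

def vulnerable_components(inventory, advisories):
    """Return (component, version) pairs that match a known advisory."""
    return [
        (name, version)
        for name, version in inventory.items()
        if version in advisories.get(name, set())
    ]

for name, version in vulnerable_components(inventory, advisories):
    print(f"UPGRADE NEEDED: {name} {version} has a known vulnerability")
```

Running this check on every build, rather than on an occasional manual sweep, is what turns component tracking into the continuous management described above.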
Periodic Audits and Attack Simulations

The technology surrounding Big Data is still constantly evolving. Periodic audits will help identify any new vulnerabilities and will allow your teams to realign security compliance with updated standards.
Preparing for war in a Big Data environment means constantly testing the boundaries. Red teaming your infrastructure is an excellent way to identify vulnerabilities, particularly in third-party and cloud-based applications. If an “attack” simulation succeeds, it allows your team to investigate the system’s weaker links and reinforce them, ensuring that your system holds up in the event of a real malicious attempt from outside.
Check Your Anti-Virus
Experts are coming to terms with the fact that one of the more effective ways to extend Big Data security is to ensure that your anti-virus solution offers that capability. The major players in this field offer a variety of Big Data security solutions, providing a stronger defense against threats.
Continue the conversation!
Do you have any other solutions to add? Leave a reply in the comment section and continue the conversation.