Open Source = Open Door?

FOSS or paid, any OS or app that's commonly used will be a potential target for hackers.
12 October 2018

Free or paid, as long as it’s popular, it’s a target. Source: Shutterstock

There seems little doubt that in the server, HPC, and stream-computing tiers of enterprise technology, open source is the clear leader.

From the early days of computing, open source solutions have competed with proprietary applications and operating systems. Whether it was Microsoft versus Red Hat or, more latterly, the (nominally) open-source Android versus iOS, by sheer number of deployed instances, open source (or rather, FOSS) solutions dominate.

Open source has always had something of the moral high ground over its more monetarily driven proprietary cousins. But any moral advantage becomes moot when we examine the security issues.

FOSS software and systems are no more or less susceptible to cyberattack than any other; rather, the way both community-created and commercial systems are built has led them to a similar cybersecurity position.

As is often the case with cybersecurity risk, many problems arise not from any inherent unreliability of software or the operating systems on which it runs, but rather from a lack of coherent human activity.

The vanilla Apache web server daemon, in its latest guise, may well be a pretty tight ship, but any deployment that is not patched is far more prone to attack.

If a system is popular among genuine users and administrators, it will be popular too among the hackers, criminals and ne’er-do-wells who lurk on the internet.

Why target a single, bespoke server when hunting for hacks on common installs presents hundreds of thousands of potential targets? Linux and open source software are now popular, which is great news for the FOSS community, but that popularity also attracts the wrong sort of attention.

Patch whatever, whenever

Step one in cybersecurity, we’re told, is the application of patches: updates from the author(s), or the bodies representing them. Yet many IT teams do not deploy every single patch that crosses their desks. Some wait for roundups, or for significant updates.

Reasons for this are many and various, but one substantial reason is that without enterprise-wide testing of updates, it’s just not safe — in a business sense — to deploy untested updates. Testing takes time, and there’s often not enough of it in a day. So, systems remain unpatched for relatively long periods.

As a case in point, those Apache-based web services: how many sites are running Apache versions that aren’t absolutely bang up to date?

A significant number, in all probability. For systems administrators aware that their services are running slightly old software, the quandary runs along the following lines:

  • If we apply the very latest patches, will they break our systems?
  • Do we have the time and resources to test those patches and updates in a sandbox environment?
  • Is running without the latest patches a significant risk, or does the newest release not address security vulnerabilities at all?
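The first of those questions has at least a mechanical component: knowing how far behind you are. A minimal sketch of that check in Python, under stated assumptions — the version strings here are illustrative, and `is_outdated` is a hypothetical helper; a real deployment would pull the installed and candidate versions from its package manager or the vendor’s release feed:

```python
# Sketch: decide whether a deployed service lags the latest release,
# given two dotted version strings (e.g. from a package manager query).
def parse_version(version: str) -> tuple:
    """Turn '2.4.29' into (2, 4, 29) so versions compare numerically."""
    return tuple(int(part) for part in version.split("."))

def is_outdated(installed: str, latest: str) -> bool:
    """True if the installed version is older than the latest release."""
    return parse_version(installed) < parse_version(latest)

print(is_outdated("2.4.29", "2.4.35"))  # behind by six point releases -> True
print(is_outdated("2.4.35", "2.4.35"))  # fully up to date -> False
```

Tuple comparison keeps the check correct where naive string comparison fails ("2.4.9" sorts after "2.4.10" as text but not as a version). Knowing the gap exists is the easy part; the testing burden described above is what keeps it open.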

The blame game

Many pieces of software are created with security at their core, both in open source development and in proprietary, commercial development houses.

There are plenty of examples of both commercial and “free” software being breached by hackers. But when security vulnerabilities become apparent, it’s easy to point fingers at a brand name. It’s less simple to point the finger at the 2,000-plus developers behind an open source software package.

In many industries such as healthcare, insurance, and banking, business-critical systems often run on older hardware and software.

They rely on complex infrastructures which simply cannot be overhauled or updated overnight. Some industries are still running software which can trace its roots back 30 or more years, to a time when a “hacker” was someone who rode a horse cross-country, and the internet was a club for academics, governments and researchers to exchange research papers.

Many organizations are more minded to shore up crumbling IT structures rather than commit the massive resources required to rebuild legacy systems from the ground up.

This approach is not only cheaper; as a bonus, it avoids the potential for professional suicide that comes with the failure of a huge, expensive IT re-engineering project.

As a result, cybersecurity experts employed in-house by end users take on the role of firefighters. Or, as Ilia Kolochenko from cybersecurity outfit High-Tech Bridge states, “I can’t change the risks. Instead, [staff] become a fireman, ambulance, policeman, an emergency worker — all rolled into one.”

The sticking plaster approach to cybersecurity is the best many organizations can manage. Source: Shutterstock

The nature of large software applications is that many people are involved in their creation. In open source, there is sometimes a lack of coherent stewardship.

And the larger the package, the more room there is for cybersecurity flaws. When huge systems like openEMR are deployed without proper third-party countermeasures, says Kolochenko, they are often open to risk, such as the exposure of millions of patient records.

Kolochenko comments: “No-one’s out to create deliberately vulnerable software, but even the most modern OSes are not adapted for cybersecurity reliability. They’re not necessarily insecure, but they’re not essentially reliable.”

Whatever solutions you deploy, therefore, sensitive information needs protecting by your own policies: neither open source nor proprietary developers can be relied upon to build in cybersecurity to a level acceptable to every end user.

FOSS or not, adequately isolated VLANs, two-factor authentication, limiting access to local users only, and removing all outside access — these are the policy options open to the enterprise.
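To make the “limiting access to local users only” policy concrete, the check itself is simple. Here is a toy sketch in Python using the standard `ipaddress` module, where the VLAN subnet `10.20.0.0/16` is a placeholder; in practice, enforcement belongs in the firewall or web server configuration, not in application code:

```python
import ipaddress

# Placeholder subnet for the isolated VLAN -- substitute your own range.
ALLOWED_VLAN = ipaddress.ip_network("10.20.0.0/16")

def is_local(client_ip: str) -> bool:
    """Allow only clients whose address falls inside the designated VLAN."""
    return ipaddress.ip_address(client_ip) in ALLOWED_VLAN

print(is_local("10.20.3.7"))    # address inside the VLAN -> True
print(is_local("203.0.113.5"))  # public internet address -> False
```

The point of the policy is that a vulnerable service behind such a boundary is reachable only by the attackers already inside it, which is a far smaller population than the whole internet.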