Constant Companions: Giving Passwords and Passphrases Their Due

“Through 20 years of effort, we’ve successfully trained everyone to use passwords that are hard for humans to remember, but are easy for computers to guess.” – Randall Munroe, xkcd

For users, passwords and passphrases are a way of life. Without biometrics, how else can an individual both identify themselves to access necessary services and prove that they are who they say they are? However, the way many businesses choose to think about passwords and passphrases is not only wrong, but harmful. Many financial institutions and workplaces require that passwords max out at a short, fixed length (anywhere between six and twelve characters) and include an uppercase letter, a lowercase letter, and at least one digit. This is, unfortunately, far from an ideal solution. In essence, any organization requiring that users create passwords under such conditions is setting those users up for failure on multiple levels. Not only are these passwords easier to crack than other options, they typically cannot be memorized, forcing users to write them down or store them elsewhere.

Data released in a recent study by Carnegie Mellon University’s CyLab indicates that traditional methods of password and passphrase creation are woefully inadequate, and that a great many users have a mistaken idea of the methods adversaries employ when attempting to crack them.

This study reveals that, “Participants, on average, also believed any password with numbers and symbols was a strong password, which is not always true. For example, p@ssw0rd was thought to be more secure than pAsswOrd, but the researchers’ attacker model predicted that it would take 4,000 times more guesses to crack pAsswOrd than p@ssw0rd. In modern day password-cracking tools, replacing letters with numbers or symbols is predictable.”
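Modern cracking tools defeat these substitutions by normalizing candidates before a dictionary lookup. A minimal sketch of the idea (the substitution table and two-word dictionary are illustrative, not taken from any particular tool):

```python
# Illustrative only: why "p@ssw0rd" adds little strength.
# Cracking tools undo common letter-for-symbol substitutions
# before checking a candidate against a dictionary.
LEET_MAP = str.maketrans({"@": "a", "0": "o", "3": "e", "$": "s", "1": "l"})

def normalize(candidate: str) -> str:
    """Undo common leetspeak substitutions."""
    return candidate.translate(LEET_MAP)

dictionary = {"password", "letmein"}  # stand-in for a real wordlist

# The "clever" variant normalizes straight back to a dictionary word.
print(normalize("p@ssw0rd") in dictionary)  # True
```

Case toggling (as in pAsswOrd) lives in a separate, much larger rule space, which is why the study’s attacker model needed thousands of times more guesses for it.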

The question then becomes: what can the user do to avoid this situation? The engineers at MetaFlows have a unique way of creating passwords and passphrases that are much more secure. Assuming the password does not appear in a known dictionary, there is a basic equation for password strength:

possibilities = complexity ^ length

where complexity is the number of possible characters the password can contain.

So a password using only lowercase letters has a complexity of 26.

A password using lowercase, uppercase, and numbers has a complexity of 62.

A complex password with a length of eight:

62^8 = 218,340,105,584,896 possibilities

A simple password with a length of twelve:

26^12 = 95,428,956,661,682,176

The longer but simpler password in this example has a total search space 437 times greater than the standard “complex” password. This is not to say that complexity is bad; complexity helps, but length is the dominant factor in strength against brute force. A password should be memorable, so going ahead and adding a number or an unusual character is fine. However, if adding that element makes the password too hard to remember, consider tacking on another easy-to-remember word instead, which increases the strength exponentially.
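The arithmetic above is easy to verify. A small sketch of the search-space equation:

```python
def search_space(alphabet_size: int, length: int) -> int:
    """Brute-force search space: complexity raised to the password length."""
    return alphabet_size ** length

complex_short = search_space(62, 8)   # upper + lower + digits, 8 characters
simple_long = search_space(26, 12)    # lowercase only, 12 characters

print(complex_short)                  # 218340105584896
print(simple_long)                    # 95428956661682176
print(simple_long // complex_short)   # 437: the longer password wins
```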

What is the difference between a password and a passphrase?

The example password, Pa$sw0rd, meets all standard complexity requirements: lowercase, uppercase, number, and special character. One of our engineers decided to see how long it would take to crack this password. The results are as follows:


Search Space 6.70×10^15

Single Machine traditional estimated crack time: 18.62 hours

Cracked during several hours while playing WoW and a good night’s sleep.

The experiment was repeated with a passphrase: a group of words strung together that acts as a password. The passphrase below meets none of the standard complexity requirements, as it is all lowercase and contains no digits. Unlike Pa$sw0rd, however, it is easy to remember.


Search Space 4.33×10^39

Single Machine traditional estimated crack time: 13.76 million trillion centuries

Still not cracked long after the death of our solar system.

In most cases, adding a few words related to the site or process in question helps with remembering them, but people are also surprisingly good at remembering almost any silly combination of words as a passphrase. The more unrelated the chosen words are, the less likely they are to ever end up in a dictionary. Adding even one nonsensical word increases the strength against dictionary attacks to a level that is realistically beyond guessable. For example, “mypasswordisnotpassword” may be obvious enough to get added to a dictionary, but “mylongitudinalpasswordisnotamonkey” is arcane.

Another method, advocated by Micah Lee at The Intercept, is Diceware. Creating a Diceware password is simple and straightforward, and the end result can be a far more secure passphrase. The Diceware method is effective because it provides randomness that the human brain cannot. Randomization is ideal when one considers entropy. “The amount of uncertainty in a passphrase (or in an encryption key, or in any other type of information) is measured in bits of entropy. You can measure how secure your random passphrase is by how many bits of entropy it contains. Each word from the Diceware list is worth about 12.92 bits of entropy (because 2^12.92 is about 7,776). So if you choose seven words you’ll end up with a passphrase with about 90.5 bits of entropy (because 12.92 times seven is about 90.5).”
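The entropy figures quoted above follow directly from the size of the Diceware list. A small sketch, using Python’s `secrets` module as the source of randomness in place of physical dice (the tiny sample wordlist stands in for the real 7,776-word list):

```python
import math
import secrets

# Five six-sided dice give 6^5 = 7,776 possible words per roll.
WORDLIST_SIZE = 6 ** 5
BITS_PER_WORD = math.log2(WORDLIST_SIZE)  # ~12.92 bits of entropy per word

def roll_word(wordlist):
    """Pick one word uniformly with a CSPRNG, mimicking five dice rolls."""
    return wordlist[secrets.randbelow(len(wordlist))]

print(round(BITS_PER_WORD, 2))      # 12.92
print(round(7 * BITS_PER_WORD, 1))  # 90.5 bits for a seven-word passphrase

sample = ["correct", "horse", "battery", "staple"]  # stand-in wordlist
print(roll_word(sample) in sample)  # True
```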

Once a user creates a password, they must have a clear idea of where to store it. While numerous password-saving applications are available on the web and scraps of paper abound, nothing is more secure than pure memorization. When creating a password, always stick to something easy to memorize as well as difficult to crack. To put it plainly, storing passwords anywhere other than the human mind creates an exploitable vulnerability. This, of course, includes writing them down on a sheet of paper and attempting to hide it. The popularity of password storage books and password applications is no indication of the level of security they provide, which is limited at best.

No matter how random and entropic a password may be, it is vital that it never be reused across services: passwords used for social media accounts should in no way resemble those used for online banking and other vital activities. It cannot be stressed enough that reusing passwords, sharing passwords, recording passwords, and repeatedly cycling through a set of passwords are all far from advisable.


FireEye Foibles

On February 15th, Blue Frost Security released a statement regarding an analysis engine evasion identified in FireEye’s virtualization-based dynamic analysis. Their statement reads, “An analysis engine evasion was identified which allows an attacker to completely bypass FireEye’s virtualization-based dynamic analysis on Windows and whitelist arbitrary malicious binaries.”

We now have a signature to detect this method of evading FireEye:

2022554 || ET EXPLOIT FireEye Detection Evasion %temp% attempt - Inbound ||

Any customers who are also using the FireEye system may want to set up additional rank or email classes for this rule so that they can be alerted to malware attempting to bypass their FireEye appliance. FireEye has released an update for this issue, which users should apply immediately if they have not already done so. However, even once the issue has been patched, seeing this bypass attempted can be a valuable indicator of malicious activity on its own, and it may well appear alongside future evasion attempts.

Taking Care of Business: The FTC Guidelines Part Two

This post is a follow-up to Part 1 of Taking Care of Business: Information Retention & Responsibility. Here, we will be covering items six through ten, with a wrap-up of what this document means and what we can do to help you stay secure, ethically responsible, and on the right side of the FTC’s standards.

Item six on their list is, “Secure Remote Access to Your Network.” The first bullet point under this heading is Ensure Endpoint Security, meaning that you must control who can log on remotely and ensure that they are doing it safely. One way is to require two-factor authentication for logins: in addition to a password, each user must present something they have, such as a one-time token generated on a separate device (a cell phone) or by a key fob. Biometrics and PINs can also serve as authentication factors.
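As an illustration of the token side of two-factor login, here is a minimal time-based one-time password (TOTP) generator per RFC 6238, which is what most phone authenticator apps implement. This is a sketch, not a production implementation:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238): HMAC-SHA1 over a time counter."""
    counter = struct.pack(">Q", unix_time // step)  # 8-byte big-endian counter
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at T=59 seconds
print(totp(b"12345678901234567890", 59, digits=8))  # 94287082
```

Because the server and the device derive the same code from a shared secret and the current time, a stolen password alone is not enough to log in.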

The FTC would also prefer that businesses limit the amount of access users have when away from the office. This is where it is useful to discuss third-party access. Restricting the amount and type of data that a third party or off-site worker can reach means the truly important data has a better chance of staying safe. Offering limited, one-time access is a great way to grant accessibility to a third-party user.

Item seven on the list is, “Apply Sound Security Practices When Developing New Products” and the first subheading asks that you “train your engineers in secure coding.” This is something tackled at the pre-design stage. It is up to your software developers to create code that is secure and will not unnecessarily put your business and clients at risk. For that to happen, they must be trained effectively on how to do so. A lack of education and foresight at this stage could be fatal before your product or service even launches.

The FTC’s second sub-heading involves following platform guidelines for security. Secure development practice guidelines are out there and available for use. Failing to follow them can open your business up to man-in-the-middle attacks through mobile applications and other dangers. There is no need to reinvent the wheel; instead, use the resources already available for creating secure software.

The last two bullet points are closely linked: “Verify that privacy and security features work” and “Test for common vulnerabilities.” This is something even the big players miss, let alone smaller companies. Often, it is smart to invest in an individual or company that provides penetration testing (pen testing). It is their job to try to get into your network in as many ways as possible. They will evaluate any weaknesses that exist within your code and review the results with you. Large companies such as Microsoft offer Bug Bounties, meaning that if a hacker (with their permission and under their conditions) finds a bug or security issue in their software, that hacker is rewarded and the bug can be fixed. Adobe, after some major security gaffes, has enlisted the help of the Bug Bounty program to help tighten up its software.

Item eight states, “Make Sure Your Service Providers Implement Reasonable Security Measures.” Since items six and seven warn you to get your software and users in line, the natural progression is that you should also evaluate anyone you do business with. The FTC advises that you “put it in writing” and “verify compliance.” Your security measures matter as much as the security measures of those providing you with valuable services such as connectivity and cloud computing, to name just two. Taking someone’s word, or accepting a handshake on the assumption that promises made outside of writing will be upheld, is inadvisable at best. Any company’s website should list its regulatory compliance information, which is easy to verify. This is ours.

In point nine, “Put Your Procedures in Place to Keep Your Security Current and Address Vulnerabilities That May Arise,” the focus is not only on how you maintain your own security practices, but also on those of any third-party vendors you work with. This is where documentation is essential: should you be summoned to court, it proves that you have maintained a good-faith relationship with respect to your security. Even after the pen-test phase, it is vital to stay on top of any perforations in your company’s defenses against adversaries. Whether it is six months or nine years after the release of a product you are responsible for, you must act upon any reports warning of a security risk in that product. Put together a way of collecting these issues and a mechanism for addressing them; do not let them get lost in the shuffle, and ignore them at your peril. This, of course, also requires staying on top of any third-party services or vendors you use, to make sure they are making good on their promises of security to you.

Last but not least, the FTC advises that you “Secure Paper, Physical Media, and Devices.” Everything already recommended for your network and digital data also applies to hard copies. The FTC asks that you “securely store sensitive files,” “protect devices that process personal information,” “keep safety standards in place when data is en route,” and “dispose of sensitive data securely.” All of this may seem like common sense and something of a no-brainer, but it is worth remembering that if enterprises both small and large actually did these things, the FTC would never have had to address gaffes in data containment by Rite Aid, CVS Caremark, and many unfortunate others.

We decided to use this precious blog space to bring these ten items to your attention, as it is our goal to keep you and your data safe. The MetaFlows MSS is continually evolving to help you better protect your enterprise from adversaries and from the potential legal fallout of any success they might otherwise have had. A tired truth is that the best defense is a good offense, and in the world of business and information security, having the right service in place can make all the difference.

Common Threads in Black Hat 2015

When discussing the need for tighter and better cyber-security, one of the common themes at Black Hat centered around the lack of research and preparation on the part of software developers. Katie Moussouris, speaking at the special event “Beyond the Gender Gap: Empowering Women in Security,” mentioned that her career revolved around encouraging software developers in major corporations to address security at the design stage, or as early as possible in the development phase. The issue with this, of course, is that if a potential exploit is discovered, the individual responsible for that discovery would receive no credit for it. The fix would simply exist as an afterthought, thus encouraging the habit of sitting still, waiting for the problem to become evident, and then offering a security patch. When internal efforts fail, it would behoove developers to seek outside assistance. However, this solution is not readily accepted. In the panel, Moussouris cited Microsoft’s initial commitment not to pay individuals to hack its products, and the challenges she faces in encouraging software developers to create Bug Bounty programs on sites such as Moussouris’ HackerOne.

When companies like Adobe do institute Bug Bounty programs, they range in effectiveness, as participants can be rewarded with everything from cash to a high-five for their efforts. However, when one considers how many vulnerabilities continue to crop up in Adobe’s software, a high-five may not be enough. Given the compromises that their Flash updates have caused, it is clear that Adobe’s approach is failing. The gravity of this issue is especially evident as Cisco’s most recent Midyear Security Report and resulting blog entry call upon companies, “To reduce the occurrence of these common code errors, software developers should participate in regular security training to build awareness of current vulnerabilities, trends, and threats.” Although the ball for creating, publishing, and updating secure software lies in the hands of software developers, only a naïve or irresponsible user would sit back and wait for the developers to handle it.

The proactive approach, on the user end, is to assume that every software system is inherently flawed and problematic, and to have a security solution already in place that can detect when deploying a new software system has unintended and quite possibly disastrous consequences. Defensive security systems must be flexible and powerful enough to meet the evolving threats coming from an onslaught of flawed software systems and vulnerability-riddled web user interfaces, threats that can catch users unaware but, ideally, not unprepared.

As the Internet of Everything becomes more of a reality, the onus is on users to make sure they are meeting the challenges that come with it. Conferences like Black Hat open up the dialogue by asking important questions, the most resounding being, “What do you plan to do to keep your information secure?” In a room full of options, this question may seem both overwhelming and considerably difficult. No one can afford to spend money on services that are not comprehensive, that will not work with others, that accidentally duplicate coverage, or that flatly do not meet the demands of a connected world.

Finding solutions and making connections are why security professionals attend Black Hat. At the MetaFlows kiosk, our engineers were able to explain to professional after professional why the SaaS model works and how the MetaFlows MSS is a cooperative solution that pulls from a variety of sources, partnering with Emerging Threats, Cyber-TA, and VirusTotal, to name a few. As Microsoft plans to release Windows 10 and Adobe continues to update its products, it is imperative that every user have a security plan in place to protect the integrity of their data.

MetaFlows: SC Magazine Innovators Hall of Fame

Our friends at SC Magazine have inducted us into the SC Magazine Innovators Hall of Fame. It is nice to be recognized for our innovations. Importantly, this recognition is based purely on their journalistic curiosity; we give them props for performing their reviews based on sound technical knowledge. We refuse to pay money for recognition. You might think we are old-fashioned, but this is how we roll at MetaFlows.

Their article also points out the importance of monitoring beyond the network perimeter using multi-session correlation. If you are not sure what multi-session correlation can do for you, it is best to put it to the test. You will be amazed at what you can find out about your network.

Read the article at SC Magazine’s Website

What’s Wrong with Sandboxing?

How Sand-Boxing Works

The latest and hottest trend in cyber-security is sand-boxing. Sand-boxing is virus detection on steroids. Instead of relying on prior knowledge about particular viruses, this technique emulates a user’s workstation with a sandbox and tracks anything that attempts to go out of the box or attempts to infect other machines. The process is straightforward:

  1. Get all potentially infectious content coming into your organization, and
  2. Emulate each piece of content as if it were executing on your hosts.

Limitations of Sand-Boxing

Sand-boxing has low false positive rates, but causes a lot of false negatives. In other words, when it tells you that something is bad, it is almost certainly bad. But it has the potential to miss a lot of bad things.

Architectural Limitations

This limitation has to do with step 1 above (get all dangerous content coming into your organization). Your defense perimeter is dissolving because of new network trends and applications:

  1. Mobile devices continuously come into and go out from your network.
  2. Peer-to-peer protocols (which go right through sand-boxing and firewall appliances) are becoming mainstream (Skype, BitTorrent, B2B applications).
  3. Services are being pushed to the cloud, out of the grasp of your sandbox.
  4. Virtual machines move around at the speed of light from one host to another.
  5. IPv6 and other emerging trends are facilitating end-to-end encrypted tunneling right through your perimeter.

So, if you do not have a perimeter, how do you know what is coming in? Well, you don’t! That is why sand-boxing (or pure virus detection) is limited in scope and cannot survive the evolution of malware.

Another architectural limitation has to do with cost. If you run a large network, executing and/or opening every piece of content before it is delivered requires a lot of CPU and will slow down your network. Sand-boxing can only scale to a certain size; beyond that it becomes unrealistic and expensive.

Algorithmic Limitations

This limitation has to do with step 2 above (emulate each piece of content as if it were executing on your hosts). Evasion is an information security term that refers to the ability of the bad guys to:

  1. Know how you are detecting them and
  2. Add subterfuges to defeat your specific security measures.

A sandbox can be detected. Once malware realizes that it is in a sandbox, it will switch to its best behavior to keep the sandbox happy. Only when the malware gets out of the sandbox and onto the actual target device will it do its damage.

A second algorithmic limitation is that not every system is the same. Sandboxing a particular version of Microsoft Windows (which is what commercial sandbox solutions do) leaves all your other devices (Linux, Apple, Android, etc.) completely open to attack.

How is MetaFlows Better?

MetaFlows is not an antivirus. We detect the attempts to introduce a virus in your network AND/OR detect the presence of a virus. Think of it as a network-level sandbox that not only inspects individual pieces of content, but also keeps track of the behavior of all your devices over time. There is one thing a malicious host cannot evade: being malicious!

If it looks like a duck, swims like a duck, and quacks like a duck… it is a duck.

How does it work?

MetaFlows looks for classes of odd behavior from hosts on your network:

  1. Scanning behavior
  2. Being attacked on vulnerable ports
  3. Downloading dangerous content
  4. Communication with questionable sites or sites that are already known to be bad
  5. Scanning outward or doing a lot of DNS lookups

If we detect behavior from multiple event classes over a time period (ranging from minutes to hours), MetaFlows triggers an alert.
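The windowed, multi-class alerting described above can be sketched as follows. The event classes, hosts, and one-hour window are illustrative; the actual MSS correlation logic is more sophisticated:

```python
from collections import defaultdict

# Hypothetical sketch of multi-session correlation: flag a host when it
# shows events from two or more behavior classes within a time window.
WINDOW_SECONDS = 3600

def correlate(events):
    """events: iterable of (timestamp, host, event_class). Returns flagged hosts."""
    seen = defaultdict(list)   # host -> [(timestamp, event_class)]
    flagged = set()
    for ts, host, cls in sorted(events):
        seen[host].append((ts, cls))
        # Distinct behavior classes observed for this host within the window
        recent = {c for t, c in seen[host] if ts - t <= WINDOW_SECONDS}
        if len(recent) >= 2:
            flagged.add(host)
    return flagged

events = [
    (0, "10.0.0.5", "brute_force_ssh"),
    (1800, "10.0.0.5", "bulk_outbound_transfer"),
    (0, "10.0.0.9", "scanning"),   # one class only: not flagged
]
print(correlate(events))           # {'10.0.0.5'}
```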

Here is simple example:

  1. External host B performs a brute force attack to guess your password on port 22 on server A .
  2. One hour later there is a large transfer of data from host B to another server C (on your network).

Bang! That’s a hit for us. But a sandbox has no clue! By itself, a sandbox would not detect this behavior. The malware could “play nice” once it realizes that it is in a sandbox. The sandbox would then allow the malware to leave and get inside your network, where it could do substantial damage. But MetaFlows can keep an eye on software even after it leaves the sandbox.

The main advantage of a network-level sand-box is that it does NOT rely solely on inspecting content (like an antivirus) but instead detects malware in the act of being bad. So, if someone walks in through your front gate with an infected laptop, as soon as that laptop misbehaves, it will be flagged down.


The best part is that MetaFlows works regardless of what devices are on your network – it solves the algorithmic limitations of sandboxes. Our behavioral event classes do not depend on the type of system: if an internal host is performing outbound scanning, we do not care if it is a Microsoft device or an Apple device. All we need to know is that it has engaged in malicious behavior.


Finally, our approach is much more scalable than a content sandbox. MetaFlows mitigates the architectural limitations of sandboxes by scaling to 10 Gbps links with standard off-the-shelf quad-CPU systems. The cost and power consumption are orders of magnitude lower.

Predictive Correlation — The Future of Cyber Security?

What is Predictive Correlation?

Research funded by the National Science Foundation has led to the development of a proprietary inter-domain correlation algorithm that is mathematically similar to Google’s Page Rank algorithm. Event scores are autonomously obtained from a global network of honeypot sensors monitored by the MetaFlows Security System (MSS). The honeypots are virtual machines that masquerade as victims. They open up dangerous ports/applications and/or browse dangerous websites. As the honeypots are repeatedly infected, the MSS records both successful and unsuccessful hacker URLs, files, bad ports, and bad services. When a honeypot has a security event that triggers a false positive, the alerts for those events are ranked negatively, thus providing insight into events that should be routinely ignored or turned off. Security events that trigger true positives are ranked positively, thus improving their visibility. This information is then propagated in real time to each of our subscribers’ sensors in the system to augment traditional correlation techniques. This additional inter-domain correlation is important because it adds operational awareness based on real-time intelligence.

How does it work?

As shown in the figure below, honeypots work behind the scenes, continuously mining global relevance data and flow intelligence (IP reputation) for threats that penetrate differing degrees of cyber-defenses on different types of systems. After this step, annotated data from all network sensors (whether the sensors are honeypots or not) are compared and events are correlated with an algorithm similar to Google’s Page Rank algorithm: (X = bs + aW*X) .

Diagram of MetaFlows event correlation system
Figure 1: Predictive Global Correlation

This process is designed to provide subscribers with intelligence data that takes into account the similarities and differences between the sources of the data. Due to space limitations we cannot explain the math and why it makes sense; however, our system builds on the work described in “Highly Predictive Blacklisting” by Jian Zhang, Phillip Porras, and Johannes Ullrich (SRI International and the SANS Institute), published at USENIX Security, August 2008 (we highly recommend that you read this article).
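For intuition only, the fixed-point equation X = bs + aW*X can be solved by simple iteration, much like the power method behind Page Rank. All numbers below are made up; the real weights and scores are derived from sensor data:

```python
# Toy illustration of solving X = b*s + a*W*X by fixed-point iteration.
a, b = 0.5, 1.0
W = [[0.2, 0.3],
     [0.1, 0.4]]    # sensor-similarity weights (hypothetical)
s = [1.0, 0.5]      # raw event scores (hypothetical)

def matvec(M, v):
    """Plain matrix-vector product."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

X = s[:]             # start the iteration from the raw scores
for _ in range(100):
    X = [b * s[i] + a * wx for i, wx in enumerate(matvec(W, X))]

# X now satisfies the fixed-point equation to numerical precision
residual = max(abs(X[i] - (b * s[i] + a * matvec(W, X)[i])) for i in range(2))
print(residual < 1e-9)   # True
```

The iteration converges because the weight matrix is scaled (here, a*W has norm well below 1), which mirrors the damping used in Page Rank.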

So What?

As a result of the algorithm, once a piece of intelligence reaches our system it is not equally distributed to all customers. Instead, it is mathematically weighted and routed to where it is most relevant, just as the first few web pages of a Google search yield the most relevant information for a particular search.

In addition to real-time intelligence on true positive security events (positive ranking), our system also provides information on security alerts that are irrelevant by demoting them and reducing false positive clutter. In other words, this system can propagate known false positives and known true positives among sensors using a mathematical model that maximizes prediction.

Graph of prediction power for MetaFlows ranking algorithm

The graph above quantifies the prediction power of the ranking algorithm. The experiment was carried out on the Snort event relevance data gathered between February 7th, 2010 and February 22nd, 2010. At the start of each day we performed the ranking operation over the previous day’s Snort event data and compared the predicted ranking values with the actual events gathered during that day from the sensors and honeypots. The simple prediction (blue line) is based on predicting that, for each sensor, the same event ranking is carried over from the previous day without running the algorithm (this is what people normally do today).

The Y axis is the hit ratio. The “hit ratio” is defined as the number of times the prediction matches the outcome in terms of the sign (positive or negative), divided by the number of non-zero rankings predicted.

  • We increment the hit counter if the prediction and the outcome have both positive rankings.
  • We increment the hit counter if the prediction and the outcome have both negative rankings.
  • We decrement the hit counter if the prediction and the outcome have opposite signs.
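The hit-ratio bookkeeping above can be expressed in a few lines (a sketch; the event rankings here are arbitrary sample values):

```python
def hit_ratio(predicted, actual):
    """Net hits over non-zero predictions: +1 when the predicted and actual
    rankings share a sign, -1 when their signs are opposite."""
    hits = 0
    nonzero = 0
    for p, o in zip(predicted, actual):
        if p == 0:
            continue                 # only non-zero predicted rankings count
        nonzero += 1
        hits += 1 if (p > 0) == (o > 0) else -1
    return hits / nonzero

# Two sign matches, one mismatch, one zero prediction: (1 + 1 - 1) / 3
print(hit_ratio([1, -2, 3, 0], [2, -1, -4, 5]))
```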

The figure shows that the ranking prediction (orange line) is strictly superior to simple prediction by 141% to 350% (depending on the day). This might not seem too impressive on the surface but if you dig a little deeper this is what it means:

  • Assuming 5 minutes of human analysis time per incident, a system with no ranking would give you a hit rate that finds 1 actionable item for every 20-30 incident investigations (or 0.4 incidents per analyst hour).
  • A system with predictive ranking would let you find 1 actionable item for every 6-7 incident investigations (2 incidents per analyst hour).

You can do the math in terms of cost savings: it’s huge! Most of the cost of network security systems is not the appliance or the software, but rather wasted analyst time!

You Should Not Just Take our Word for it!

The cyber-security arena is packed with technologies that claim they have the best solutions. That is why we encourage users to take the time to evaluate our predictive correlation and run it side-by-side with existing solutions. The outcome is always surprisingly good.

Collaborate with An Audit Log

Audit Log

The MetaFlows Security System allows organizations to grant access to multiple users for online collaboration in sharing sensor data and intelligence. This is a big advantage because it helps distribute workloads across departments and at different levels of the incident response process. One issue customers brought up was the inability to know who took what action, and when. This is why we added the Account Audit Log feature. You can find this feature under Account -> Account Audit Log. With this new Audit Log, you can track most account actions, including:

  • Changes to contact information and subscription
  • All account access
  • Sensor restarts
  • Creating, changing, or deleting:
    • Sensors
    • Classifications
    • Snort Rules
    • Report Specifications

For every logged action, we track the user, time, and IP address from which these actions originated. We also provide extra details if available.

New Packet Logging and File Carving

Packet Logging and File Carving

Being able to go back and look at the payloads or files transmitted on a network is extremely useful for several reasons:

  1. If you do not have the payload, you cannot really prove malicious intent, and legally you are on the hook.
  2. Payloads/Files are the ultimate forensic tool to decide if a particular incident is a false positive or a true positive.
  3. In more advanced systems, payloads can also be used to find false negatives (things that should have caused a security event but did not).

Obviously, logging all data traversing a network is challenging because disk space is limited and disks are relatively slow.

The MetaFlows Security System Logging Approach

Our overall approach to overcoming these logging limitations is:

  • We store Payloads/Files that are associated with a specific security alert (using the time and the source/destination addresses and ports for identification)
  • When logging proactively (to also capture payloads/files that do not involve a security alert), we keep the disk at 90% utilization or below a certain number of gigabytes by deleting the oldest logs.
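The proactive-logging retention policy can be sketched as a simple size-capped pruning pass over the capture files (file metadata is simulated here as tuples; the real system works directly on the sensor's disk):

```python
# Sketch of size-capped retention: keep the newest capture files whose
# combined size fits under the cap, delete everything older. Values are
# hypothetical.
def prune_logs(logs, cap_bytes):
    """logs: list of (timestamp, size_bytes). Returns the files to keep."""
    kept = sorted(logs, key=lambda f: f[0], reverse=True)  # newest first
    total, result = 0, []
    for ts, size in kept:
        if total + size > cap_bytes:
            break                      # this file and everything older goes
        total += size
        result.append((ts, size))
    return sorted(result)              # back to chronological order

logs = [(1, 40), (2, 40), (3, 40)]
print(prune_logs(logs, 100))           # [(2, 40), (3, 40)]
```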

This scheme gives you certainty of access if there is an incident and a time window to go back in time to look for certain things that might have been overlooked.

Recent Improvements

The Logging and File carving system has been vastly improved by the following:

  1. We now index the packets based on IP addresses using a proprietary approach. Instead of looking for particular packets in one big bucket full of files, the files are divided into smaller buckets, each representing a subset of the addresses. This indexing scheme slows down packet logging a bit but makes looking for packets about 200 times faster!
  2. We added the ability to specify user-defined logging policies. Once a policy hits, the logging system prioritizes all packets for the matching policy and stores the Files/payloads in a separate high-priority repository which takes precedence over the normal logging. We will make a separate announcement on the policy specification because it is quite powerful and complex, and requires a dedicated post. For now, the only logging policy is to prioritize any packets involved in high priority events. In the future users will be able to customize more precise ad-hoc policies based on IP addresses, ports, and type of alerts.
  3. The new carving system is backward compatible and automatically converts the existing packet logs stored on the sensor hard drive to the new indexing scheme. This process can take from a few minutes to days depending on your disk size. While this conversion takes place, queries on older logs may not return any data.
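The idea behind address-based bucketing can be illustrated with a simple stable-hash scheme: the same address always lands in the same bucket, so a lookup touches one small bucket instead of every log file. The bucket count and hash choice below are hypothetical; the production indexing is proprietary:

```python
import hashlib

NUM_BUCKETS = 256  # illustrative bucket count

def bucket_for(ip: str) -> int:
    """Stable bucket index for an address; identical IPs always collide."""
    digest = hashlib.sha256(ip.encode()).digest()
    return digest[0] % NUM_BUCKETS

print(bucket_for("192.0.2.4") == bucket_for("192.0.2.4"))  # True: deterministic
print(0 <= bucket_for("2001:db8::1") < NUM_BUCKETS)        # True
```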

Got IPv6?

IPv6 support

Many organizations are transitioning to IPv6 because it allows the address space to be managed more easily. One thing is for sure: hackers are on top of it; they are already serving malware from IPv6-capable servers! It is therefore imperative that all security software be IPv6-capable in order to avoid glaring security holes.

When it comes to IPv6, most people put their blinders on. Most security policies simply ignore it because it is not mainstream. But you can be sure that whatever is being ignored can be used against you. IPv6 tunnels are proliferating and are usually not monitored at all; they can easily become a data exfiltration superhighway out of your network.
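On the detection side, well-known tunnel prefixes can be spotted directly with Python's standard ipaddress module, which decodes 6to4 (2002::/16) and Teredo (2001::/32) addresses. This sketch only flags the tunnel types the stdlib exposes; a real monitor would inspect traffic, not just addresses:

```python
import ipaddress

def tunnel_type(addr: str):
    """Return '6to4' or 'teredo' for addresses in well-known tunnel
    prefixes, or None for everything else (including IPv4)."""
    ip = ipaddress.ip_address(addr)
    if ip.version != 6:
        return None
    if ip.sixtofour is not None:   # 2002::/16 with an embedded IPv4 address
        return "6to4"
    if ip.teredo is not None:      # 2001::/32 with embedded server/client
        return "teredo"
    return None

print(tunnel_type("2002:c000:204::1"))  # 6to4 (embeds IPv4 192.0.2.4)
print(tunnel_type("2001:db8::1"))       # None: documentation prefix, not a tunnel
```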

The MetaFlows Security System now works on both IPv4 and IPv6, without gaps in your security.