Taking Care of Business: The FTC Guidelines Part Two

This post is a follow-up to Part 1 of Taking Care of Business: Information Retention & Responsibility.  Here, we will be covering items six through ten, with a wrap-up of what this document means and what we can do to help you stay secure, ethically responsible, and on the right side of the FTC’s standards.

Item six on their list is, “Secure Remote Access to Your Network.”  The first bullet point under this heading is “Ensure Endpoint Security,” meaning that you must control who can log on remotely and ensure that they are doing it safely.  One way is to require two-factor authentication for logins.  In addition to a password, each user must present a second factor, such as a token generated on a separate device (a cell phone app or a key fob).  Biometrics and PINs are also commonly used second factors.
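To make the second factor concrete: the time-based one-time passwords (TOTP) produced by most authenticator apps boil down to a few lines of code. Below is a minimal sketch of the RFC 6238 algorithm in Python; the secret shown is the RFC's published test key, not a real credential.

```python
import hashlib
import hmac
import struct
import time

def totp(secret, timestamp=None, digits=6, step=30):
    """Time-based one-time password (RFC 6238), the second factor
    generated by most authenticator apps."""
    if timestamp is None:
        timestamp = int(time.time())
    counter = timestamp // step              # 30-second time window
    msg = struct.pack(">Q", counter)         # big-endian 64-bit counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F               # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890", T = 59 seconds
print(totp(b"12345678901234567890", timestamp=59, digits=8))  # 94287082
```

Because the server and the device share the secret and the clock, the server can recompute the same code and verify it without any password ever crossing the wire.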

The FTC would also prefer that businesses limit the amount of access that users have when away from the office.  This is where it is useful to discuss third-party access.  Restricting the amount and type of data that a third party or an off-site worker can reach means that the truly important data has a better chance of staying safe.  Offering limited, one-time access is a great way to approach giving accessibility to a third-party user.
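One common way to implement limited, time-boxed third-party access is a signed token that expires on its own. The sketch below is illustrative only; the key, resource name, and token format are invented for this example.

```python
import hashlib
import hmac
import secrets
import time

SERVER_KEY = b"replace-with-a-real-secret-key"  # hypothetical signing key

def issue_token(resource, ttl_seconds=3600):
    """Grant time-limited access to a single named resource."""
    expiry = int(time.time()) + ttl_seconds
    payload = "%s|%d|%s" % (resource, expiry, secrets.token_hex(8))
    sig = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "|" + sig

def verify_token(token, resource):
    """Accept the token only if the signature checks out, it names the
    requested resource, and it has not expired."""
    payload, _, sig = token.rpartition("|")
    expected = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    res, expiry, _nonce = payload.split("|", 2)
    return res == resource and int(expiry) > time.time()
```

The third party never holds a standing credential: the grant names exactly one resource, cannot be altered without invalidating the signature, and stops working when the clock runs out.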

Item seven on the list is, “Apply Sound Security Practices When Developing New Products” and the first subheading asks that you “train your engineers in secure coding.”  This is something tackled at the pre-design stage. It is up to your software developers to create code that is secure and will not unnecessarily put your business and clients at risk.  For that to happen, they must be trained effectively on how to do so.  A lack of education and foresight at this stage could be fatal before your product or service even launches.

The FTC’s second sub-heading involves following platform guidelines for security.  Secure development practice guidelines are out there and available for use.  Failing to follow them can open your business up to man-in-the-middle attacks through mobile applications, among other dangers.  There is no requirement to reinvent the wheel; instead, use the resources that already exist for creating secure software.
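One of the most common client-side mistakes behind those man-in-the-middle attacks is disabling TLS certificate validation during development and shipping it that way. In Python, for example, the safe configuration is the standard-library default; the sketch below spells out the two checks that must stay on.

```python
import ssl

def strict_tls_context():
    """A TLS context with certificate and hostname verification enabled.
    Disabling either check (a common shortcut during development) is what
    opens the door to man-in-the-middle attacks."""
    ctx = ssl.create_default_context()
    # These are already the defaults; set explicitly here for emphasis.
    ctx.check_hostname = True
    ctx.verify_mode = ssl.CERT_REQUIRED
    return ctx
```

A socket wrapped with `ctx.wrap_socket(sock, server_hostname=host)` will then reject any server whose certificate does not chain to a trusted CA or does not match the hostname.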

The last two bullet points are closely linked: “Verify that privacy and security features work” and “Test for common vulnerabilities.”  This is something that even the big guys miss, much less the smaller companies out there.  Often, it is smart to invest in an individual or company that provides penetration testing (pen testing).  It is their job to try to get into your network in as many ways as possible.  They will evaluate any weaknesses that exist within your code and review the results with you.  Large companies such as Microsoft offer bug bounties, meaning that if a hacker (with their permission and under their conditions) finds a bug or security issue with their software, that hacker is rewarded and the bug can be fixed.  Adobe, after some major security gaffes, has enlisted the help of a bug bounty program to help tighten up its software.
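Injection flaws are a textbook example of the "common vulnerabilities" such testing should catch. The sketch below contrasts string-built and parameterized SQL using Python's built-in sqlite3 module; the table and data are invented for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

def find_user_unsafe(name):
    # String concatenation: attacker-controlled input becomes SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = '%s'" % name).fetchall()

def find_user_safe(name):
    # Parameterized query: the driver treats the input strictly as data.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # [(1,)] -- injection succeeds
print(find_user_safe(payload))    # []     -- injection fails
```

A pen tester will throw exactly this kind of payload at every input field; the parameterized version is what makes the finding come back clean.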

Item eight of the ten states, “Make Sure Your Service Providers Implement Reasonable Security Measures.”  Since points six and seven warn you to get your software and users in line, the natural progression leads to the idea that you should evaluate anyone you do business with.  They advise that you “put it in writing” and “verify compliance.”  Your security measures matter as much as those of the companies that provide you with valuable services such as connectivity and cloud computing, just to name a few.  Taking someone’s word, or accepting a handshake with the assumption that any promises made outside of writing will be upheld, is inadvisable at best.  Any company’s website should list its regulatory compliance information, which is easy to verify.  This is ours.

In point nine, “Put Procedures in Place to Keep Your Security Current and Address Vulnerabilities That May Arise,” the focus is not only on how you maintain your own security practices, but also on those of any third-party vendors you may work with.  This is where documentation is essential: should you be summoned to court, it proves that you have maintained a good-faith relationship with your security obligations.  Even after the pen-test phase, it is vital to stay on top of any gaps in your company’s defense against adversaries.  Whether it is six months or nine years after a product you are responsible for is released, you must act upon any reports warning of a security risk in that product.  Put together a way of collecting these issues and a mechanism for addressing them.  Do not let them get lost in the shuffle; ignore them at your peril.  This, of course, also requires that you stay on top of any third-party services or vendors you use, to make sure they are making good on their promise of security to you.
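That "mechanism" need not be elaborate. Even a minimal intake queue that timestamps every report and keeps unresolved items visible prevents them from getting lost. A toy sketch, with class and field names invented for illustration:

```python
import time
from dataclasses import dataclass, field

@dataclass
class VulnReport:
    product: str
    summary: str
    received: float = field(default_factory=time.time)
    status: str = "open"          # open -> triaged -> fixed

class ReportQueue:
    """Collects incoming security reports so none get lost in the shuffle."""

    def __init__(self):
        self._reports = []

    def submit(self, product, summary):
        report = VulnReport(product, summary)
        self._reports.append(report)
        return report

    def unresolved(self):
        # Anything not yet fixed stays visible until someone acts on it.
        return [r for r in self._reports if r.status != "fixed"]
```

In practice this role is filled by a ticketing system or a security@ mailbox with a tracked workflow; the point is that every report lands somewhere durable and stays visible until it is closed.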

Last but not least, the FTC advises that you, “Secure Paper, Physical Media, and Devices.”  Everything already recommended for your network and digital data also applies to any hard copies.  The FTC asks that you “securely store sensitive files,” “protect devices that process personal information,” “keep safety standards in place when data is en route,” and “dispose of sensitive data securely.”  All of this may seem like common sense and somewhat of a no-brainer, but it is worth remembering that if enterprises, both small and large, did these things, the FTC would never have had to address gaffes in data containment by Rite Aid, CVS Caremark, and many unfortunate others.

We decided to use this precious blog space to bring these ten items to your attention, as it is our goal to keep you and your data safe.  The MetaFlows MSS is continually evolving to help you better protect your enterprise from adversaries and from the potential legal fallout of any success they might otherwise have had.  A tired truth is that the best defense is a good offense, and in the world of business and information security, having the right service in place can make all the difference.

Feature Spotlight: Global Enterprise Solution


The MSS Global Enterprise (MSS GE) is a complete turn-key security system intended for large Enterprise or Government networks, and includes advanced Malware/Botnet detection, Intrusion Prevention, Log Management/SIEM, and integrated vulnerability assessment. The MSS GE controller can be deployed either as a high performance Appliance (starting at 1200 Events/Second) or as an Amazon EC2 instance (AMI). The MSS GE sensors can be easily provisioned on off-the-shelf hardware (up to 10 Gbps per sensor) running Linux CentOS/RedHat, high-performance Appliances, VMware or on Amazon EC2.


Web Security Console

  • Real Time SIEM, Flow & Log management
  • Multi-user Online Collaboration
  • One-click Remediation
  • Highly Customizable
MSS GE Controller

  • Deploy as an Appliance or as an Amazon EC2 Instance
  • Predictive Event Correlation quickly finds Malware
  • Centralized Sensor Provisioning
Daily Intelligence Feeds

  • Behavioral Malware Detection
  • Zero-day/APT Intelligence
  • Vulnerability Scanning
  • Geo-location Intelligence

 

Old Dog, New Tricks: Reengineering Human Behavior Can Foil Phishing

No, UPS does not have a package waiting for you and that prince in Nairobi does not really want to give you $50,000, no matter how well thought out his plan is.

The article below details how, with just a bit of training, even your typical end-user can become more savvy and avoid those pesky phishing emails, thus saving your network from nonsense.

Reengineering Human Behavior Can Foil Phishing

Find out how the MetaFlows Security System, by utilizing Network Level AntiVirus and an Internal File Carver, can notify on and prevent pesky phishing scams.

And Now For Something Completely Technical: PF_RING 10 Gbps Snort IDS

You can always visit the MetaFlows Website for more information.

PF_RING based 10 Gbps Snort multiprocessing

Tested on CentOS 6 64bit using our custom PF_RING source

PF_RING load balances network traffic originating from an Ethernet interface by hashing the IP headers into N buckets. This allows it to spawn N instances of Snort, each processing a single bucket and achieve higher throughput through multiprocessing. In order to take full advantage of this, you need a multicore processor (like an I7 with 8 processing threads) or a dual or quad processor board that increases parallelism even further across multiple chips.
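The bucketing idea can be illustrated in a few lines. The sketch below is not PF_RING's actual hash (which operates on the raw IP header in the kernel); it just shows how a symmetric hash keeps both directions of a flow in the same bucket.

```python
import zlib

def bucket_for_flow(src_ip, dst_ip, n_buckets):
    """Map a packet to one of N buckets by hashing its IP pair.
    Sorting the addresses makes the hash symmetric, so both directions
    of a conversation land in the same bucket -- and therefore are
    handled by the same Snort instance."""
    key = "|".join(sorted([src_ip, dst_ip])).encode()
    return zlib.crc32(key) % n_buckets
```

Because every packet of a flow maps to one bucket, each Snort process can keep complete per-flow state without ever coordinating with its siblings.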

In a related article we measured the performance of PF_RING with Snort inline at 1 Gbps on an I7 950. The results were impressive.

The big deal is that now you can build low-cost IDPS systems using standard off-the-shelf hardware.

You can purchase our purpose-built Hardware with MetaFlows PF_RING pre-installed, giving you a low cost high performance platform to run your custom PF_RING applications on. If you are interested in learning more, please contact us.

In this article we report on our experiment running Snort on a dual processor board with a total of 24 hyperthreads (using the Intel X5670). Besides measuring Snort processing throughput varying the number of rules, we also (1) changed the compiler used to compile Snort (GCC vs. ICC) and (2) compared PF_RING in NAPI mode (running 24 Snort processes in parallel) and PF_RING Direct NIC Access technology (DNA) (running 16 Snort processes in parallel).

PF_RING NAPI performs the hashing of the packets in software and has a traditional architecture where the packets are copied to user space by the driver. Snort is parallelized using 24 processes that are allowed to float on the 24 hardware threads while the interrupts are parallelized on 16 of the 24 hardware threads.

PF_RING DNA performs the hashing of the packets in hardware (using the Intel 82599 RSS functionality) and relies on 16 hardware queues. The DNA driver allows 16 instances of Snort to read packets directly from the hardware queues, therefore virtually eliminating system-level processing overhead. The limitations of DNA are that it (1) supports a maximum of 16x parallelism per 10G interface, (2) only allows 1 process to attach to each hardware queue, and (3) costs a bit of money or requires Silicom cards (well worth it). Limitation (2) is significant because it does not allow multiple processes to receive the same data. So, for example, if you run “tcpdump -i dna0”, you could not also run “snort -i dna0 -c config.snort -A console” at the same time. The second invocation would return an error.

GCC is the standard open source compiler that comes with CentOS 6 and virtually all other Unix systems. It is the foundation of open source and without it we would still be in the stone age (computationally).

ICC is an Intel proprietary compiler that goes much further in extracting instruction- and data-level parallelism of modern multicore processors such as the i7 and Xeons.

All results are excellent and show that you can build a 5-7 Gbps IDS using standard off-the-shelf machines and PF_RING. The system we used to perform these experiments is below:

The graph above shows the sustained Snort performance of 4 different configurations using a varying number of Emerging Threats Pro rules. As expected, the number of rules has a dramatic effect on performance for all configurations (the more rules, the lower the performance). In all cases, memory access contention is likely to be the main limiting factor.

Given our experience, we think that our setup is fairly representative of an academic institution, though we have to admit that measuring Snort performance in the absolute is hard. No two networks are the same, and rule configurations vary even more widely; nevertheless, the relative performance variations are important and of general interest. You can draw your own conclusions from the above graph; however, here are some interesting observations:

  • At the high end (6900 rules) ICC makes a big difference by increasing the throughput by ~1 Gbps (25%)
  • GCC is just as good, maintaining throughput around 5 Gbps
  • PF_RING DNA is always better than PF_RING NAPI.

We describe below how to reproduce these numbers on Linux CentOS 6. If you do not want to go through these steps, we also provide this functionality through our security system (MSS) pre-packaged and ready to go. It would help us if you tried it and let us know what you think.

SC Magazine Review

Industry Innovators: Hall of Fame

In terms of network security, SC Magazine knows how to zero in on what is important. As part of their end-of-year review of the best products, they felt inclined not just to mention us, but to review us:

The idea behind this Innovator’s service is that one can place sensors strategically around an enterprise and send the outputs to the cloud where advanced processing performs a host of security functions to result in more efficient, faster and more accurate functions than doing the same ones on-premises. Add global intelligence gathering to give depth and breadth to the core data available and you have the MetaFlows Security System (MSS).

Read more…

An IPS on Steroids

“An IPS on Steroids: MetaFlows Security System”

The secret behind the MetaFlows Security System (MSS) is that it really is a hybrid application. It collects data on the network and acts on malicious activity. So far, this is just about the same as any intrusion prevention system (IPS). But don’t be fooled. This is not just any IPS. Because it is a hybrid application – local and cloud-based – users get a lot of benefits from the cloud piece that are not available from a standard IPS. For example, a typical IPS gets its updates at whatever update interval the vendor determines. The updates usually are based on the efforts of the vendor’s threat assessment laboratory. Not so for MSS.
Peter Stephenson’s First Look at the MSS