Passive TLS mirroring

TLS inspection provides visibility into network threats hidden inside TLS-encrypted sessions. This functionality is provided today by either Active or Passive TLS Interception.

Active TLS Interception is achieved by inserting a cryptographic system inline (for example, a web proxy or man-in-the-middle appliance) between the clients and the servers. These inline systems can then make a copy of the clear text before encrypting or after decrypting, inspect it locally and optionally send it out to another device (TLS mirroring).

Passive TLS Interception is achieved by snooping the ephemeral session key of each TLS session and providing the key to the network monitoring system, which can then use it to decrypt and inspect the TLS payloads.

Both of these techniques require extra hardware (whether virtual or physical) and additional cryptographic operations. They also introduce significant complexity, some vulnerabilities and additional points of failure.

Passive TLS mirroring

Passive TLS mirroring is a hybrid between active and passive TLS interception that (1) is well suited for both public cloud and on-premise environments, (2) is much simpler to implement and (3) does not require additional hardware or cryptographic operations.

Passive TLS Mirroring leverages open source TLS implementations and works by making a copy of the clear text at the interface between the application (like httpd or curl) and the TLS library (like openssl). Coupled with a patent-pending algorithm to map the text to network flows, this unique mechanism implements a reliable TLS mirror of the decrypted TLS sessions without sharing keys or requiring additional cryptographic operations. The open source portion of the mechanism is exemplified below:

send(security_context, clear_text, destination) {
    ...
    send_to_agent(clear_text);    // added by MetaFlows
    ...
}

receive(security_context, encrypted_text) {
    ...
    // clear_text is recovered by decrypting encrypted_text
    send_to_agent(clear_text);    // added by MetaFlows
    ...
}

As you can see, we simply provide a local agent (called sslcollector) that receives the clear-text messages obtained with a simple instrumentation (send_to_agent) of the send and receive TLS API calls. The clear text is then forged into TCP/IP packets that mimic the corresponding TLS session. Optionally, we can also decode HTTP/2 to appear as traditional HTTP/1.1.
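To make the hand-off concrete, here is a minimal sketch of what a send_to_agent() hook could look like: it forwards each clear-text buffer to a local collector over a Unix datagram socket. The socket path, framing, and direction tags are illustrative assumptions, not MetaFlows' actual protocol.

```python
import socket

# Hypothetical socket path for the local collector (an assumption,
# not the real sslcollector interface).
AGENT_SOCKET = "/var/run/sslcollector.sock"

def frame_message(direction, clear_text):
    # Tag each buffer with its direction (b"tx" or b"rx") so the collector
    # can rebuild both halves of the TCP conversation.
    return direction + b"\x00" + clear_text

def send_to_agent(clear_text, direction=b"tx"):
    s = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
    try:
        s.sendto(frame_message(direction, clear_text), AGENT_SOCKET)
    except OSError:
        pass  # never let monitoring failures break the application
    finally:
        s.close()
```

The key design point is that the hook is fire-and-forget: if the collector is down, the application's TLS traffic is unaffected.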

Cloud-based passive TLS mirroring
Passive TLS Mirroring in a public cloud like AWS

Passive TLS mirroring works especially well in cloud-based deployments, where the mirrored TCP/IP packets are sent (optionally encapsulated in a TLS tunnel) to an external monitoring system such as a remote IDS sensor. Each system being monitored would execute the command below:

sslcollector -I <iface> -t <ip_address> 

The IDS sensor at <ip_address> would replay the decrypted packets to <iface> by executing:

sslcollector -C -o <iface>

The IDS sensor simply receives the decrypted TCP/IP traffic without having to perform any decryption or having to fetch or manage ephemeral session keys in real time.

On-premise passive TLS mirroring
On premise Passive TLS Mirroring

Our technology also works on-premise. In its simplest form, the clear text obtained from the application is replayed through a secondary interface to an external switch. Each system being monitored would execute the command below:

sslcollector -I <iface1> -o <iface2>

The switch aggregates the mirrors and sends them to the IDS system. In a more port-efficient on-premise implementation, a simple Linux aggregation appliance can aggregate multiple mirrors before sending them to the switch.


The table below compares important features of existing TLS inspection approaches. As you can see, Passive TLS Mirroring offers a number of unique advantages; it is the only approach that provides all of the following:

  • Does not affect availability or performance
  • TLS mirroring both in the cloud and on-premise
  • End-to-end encryption preserved
  • Transparent to the monitoring system or IDS (at best partial in other approaches)
  • No additional key management or key-related vulnerabilities
  • No additional cryptographic operations (at best partial in other approaches)

Comparison of TLS inspection approaches

An inline appliance, whether it is a man-in-the-middle (MITM) appliance or a web load balancer such as Nginx, affects performance and reliability. Even though these systems can mirror the clear text, (1) they are unavailable or difficult to implement in public clouds such as AWS and (2) they are not transparent to the endpoints and the monitoring systems.

Ephemeral session key extraction and distribution shares some of the transparency properties of passive TLS mirroring but suffers from the operational burden of (1) having to distribute potentially thousands of keys per second in a timely fashion to the IDS system and (2) forcing the IDS system to decrypt TLS packets (in addition to all of its other traditional functions). Moreover, the ephemeral keys (if leaked) can compromise forward secrecy (someone can record the TLS traffic and later decrypt it with the ephemeral keys), thus introducing an additional security risk.

In summary, we believe that Passive TLS Mirroring has significant practical advantages over existing solutions (especially in cloud-based deployments).

How to get it

We offer passive TLS mirroring as an add-on to the MetaFlows Security System or as a stand-alone offering to augment any existing traffic monitoring system. Passive TLS mirroring installs in minutes on any mainstream Linux distribution and on recent Windows versions.

Please contact us for further details and a trial.

AWS GuardDuty Support

GuardDuty is a basic, easy-to-use intrusion detection system provided by Amazon Web Services. The main benefit of GuardDuty is that it is extremely easy to set up (one click). Over time, the user can customize the detection and prevention rules to make the AWS findings more useful.

One big drawback of GuardDuty is that it provides very little forensic and correlation capability. As with most intrusion detection systems, the user can only look at single events in isolation, without the ability to investigate why they were generated within a more general security context. Without this deeper understanding, single events are not very useful and, more importantly, by themselves may fail to reveal more important security insights.

MetaFlows now supports the analysis and correlation of AWS GuardDuty events, thus providing a more advanced forensic capability through the MetaFlows Security System (which includes full packet logging, pcap generation and a number of advanced forensic tools).


GuardDuty only works within each availability zone; therefore you will need a MetaFlows sensor running in the same availability zone where you want to run GuardDuty.

Below are the steps necessary to forward GuardDuty events to a MetaFlows sensor.

  • Enable GuardDuty.
  • Find the GuardDuty MetaFlows port for the sensor to which you want to export GuardDuty events. Please contact MetaFlows' support on how to find this value.
  • Go to the Amazon SNS service and create a topic called guardduty. Note: the Amazon SNS topic must be in the same Region as your AWS GuardDuty service.
  • Open the CloudWatch console.
  • In the navigation pane, choose Rules, and then choose Create rule.
  • From the Service Name menu, choose GuardDuty.
  • From the Event Type menu, choose GuardDuty Finding.
  • In Targets, choose Add target.
  • In Select Target, choose SNS topic.
  • In Select Topic, choose your SNS topic guardduty.
  • Click Configure details and enter a mnemonic name like Guarddutytometaflows.
  • Go back to the SNS console and select the topic guardduty.
  • Click Create subscription.
  • Select the HTTPS protocol.
  • Under Endpoint, enter <port>, where <port> is the port number that you obtained in step 2, and click Create subscription.
  • Click Subscriptions and verify that the new subscription status is Confirmed.
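For intuition about what happens behind the Confirmed status above, here is an illustrative sketch of what any SNS HTTPS endpoint (such as the sensor's GuardDuty port) must do with incoming messages. The field names follow the standard Amazon SNS message format; the finding handling itself is a placeholder, not MetaFlows' implementation.

```python
import json
import urllib.request

def handle_sns_message(body):
    msg = json.loads(body)
    if msg.get("Type") == "SubscriptionConfirmation":
        # Visiting the SubscribeURL once is what moves the subscription
        # to the Confirmed state.
        urllib.request.urlopen(msg["SubscribeURL"])
        return None
    if msg.get("Type") == "Notification":
        # The GuardDuty finding itself is JSON inside the Message field.
        return json.loads(msg["Message"])
    return None
```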

After the subscription is confirmed, you will start receiving GuardDuty events in the MetaFlows system, such as the ones below:

Sample GuardDuty Events
Forensic Analysis Menu

As with any other event type, GuardDuty events are aggregated by IP addresses, ports and classes, so you can easily explore the event types hierarchically and correlate them with other MetaFlows events that share the same IP addresses. Importantly (as shown here), you can also run several forensic tools on the events, including inspecting the payloads associated with the network activity that triggered them. You can also easily start blocking the servers or the clients with our IPS system, as well as initiate vulnerability scanning.

If you are interested in exploring how MetaFlows can augment AWS GuardDuty or other AWS security products, do not hesitate to contact us.

Thanks and Happy Hunting!

Web Application IDS Update

We updated the passive web application IDS rules from SpiderLabs. These rules should now be much more useful. You can set the mode in the sensor configuration to Server or Client. In server mode, the Web IDS alerts on any suspicious request that caused the server to respond with any HTTP code. In client mode, we generate alerts only if a client's malformed requests (1) are persistent across multiple servers, (2) trigger different Web IDS rules and (3) elicit error responses from the servers.
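The client-mode criteria above can be sketched as a simple predicate. The event structure and thresholds are illustrative assumptions, not the actual sensor logic: a client alerts only when its suspicious requests hit multiple servers, trip more than one Web IDS rule, and draw HTTP error responses.

```python
def client_mode_alert(events):
    # events: list of Web IDS hits attributed to one client
    servers = {e["server"] for e in events}      # (1) multiple servers
    rules = {e["rule_id"] for e in events}       # (2) different rules
    errors = [e for e in events if e["http_code"] >= 400]  # (3) errors
    return len(servers) > 1 and len(rules) > 1 and len(errors) > 0
```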

Server mode allows you to see how your web servers are being targeted and helps you create better web application filtering rules. Client mode triggers more rarely and finds persistent attempts that go beyond occasional false positives or probing. If you see a client from your network triggering the Web application IDS system, it is a good indication that it is running bad applications or accessing questionable services.

If you want to turn off some of the rules, search for the rule ID under Rules->ModSecurity Rules (after you select the appropriate sensor), slide the disable switch and Save and Restart.

Web Application IDS rules correlated with a Snort IDS rule and syslog messages.

To search for Web Application IDS rules only, look for modsec using the Search form and set the event type to any. As shown above, modsec alerts are also correlated with your Emerging Threats IDS rules and other external syslog messages, and therefore you will also see them if you search for correlated alerts.

Let us know if you have any questions.

Happy Hunting!

Cloud Agents Statistics

Many of you use the MetaFlows Security System to monitor cloud-based instances. Agents running on the instances send their network packets to the sensor for inspection. We now report the real time bandwidth each agent produces.

The sensors which receive network traffic from agents will now have a clickable button that reveals the agents’ statistics.

Clicking on the button will open a table with each agent's IP address, the source port being used and the Kbps being sent. These stats change in real time.

Hovering will reveal some passive host discovery information (DNS, MAC, DHCP info, HTTP agents, proxies, etc.). Clicking on each agent IP will open a historical report for that IP to see what that instance has been up to in the last day or so.

And, oh yeah, we added some links on the dashboard. One can be used to rate us for Amazon (a 30-second survey) and one for Gartner (this will take you 10-15 minutes or so, but they give you a $25 gift card). We would appreciate it if you could give us feedback.


Feel free to contact us if you have any questions. Happy Hunting!

MineMeld Support

MineMeld is an open source threat feed management system that gathers IP addresses, URLs and domains which pose a significant network security threat. The threat feed sources can be free, subscription-based or proprietary. MineMeld re-scans the feeds at regular time intervals and continuously aggregates and updates the set of all threat indicators to be consumed by firewalls, IDS/IPS, or any other security device.

MetaFlows now includes MineMeld public threat feeds to augment our existing intelligence sources. The public threat feeds amount to about 200,000 additional indicators updated every few hours. Users also have the ability to add site-specific (either subscription-based or private) MineMeld sources.

IPv4 and URL/Domain indicators are treated differently.

IPv4 feeds

The default MineMeld IPv4 feeds processed by MetaFlows are below:

Table: MineMeld IPv4 feed sources and their current indicator counts

MineMeld IPv4 addresses are compiled into a set of IDS/IPS rules designed to alert or block communications to blacklisted addresses. MetaFlows uses a proprietary technique to quickly look through this huge list of addresses (140,000+) and therefore does not require specialized hardware for high-speed networks.
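For scale, here is the naive baseline for matching packet addresses against a large reputation list; this is not MetaFlows' proprietary technique, just the straightforward hash-set approach such a technique must improve on at line rate.

```python
import ipaddress

class ReputationList:
    def __init__(self, entries):
        # Store addresses as integers for compact, O(1) membership tests.
        self.bad = {int(ipaddress.ip_address(a)) for a in entries}

    def match(self, src, dst):
        # Flag a packet if either endpoint appears in the list.
        return (int(ipaddress.ip_address(src)) in self.bad
                or int(ipaddress.ip_address(dst)) in self.bad)
```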

reputation ruleset


The MineMeld IPv4 feeds are in the mmreputation.rules configuration file, which can be accessed through the existing IDS rule management UI. The feeds are not activated by default, but users can activate them in IDS or IPS mode with just a few clicks. If enabled, these rules can be very useful to detect and/or prevent communication with questionable hosts on the Internet.


All the IP addresses are reduced to approximately 40 separate signatures. Each signature corresponds to a specific feed source (for example blocklist_de) or an intersection of sources where the IPv4 address is present in more than one source (for example blocklist_de_alienvault.reputation). This decomposition provides additional operational awareness that can be used to prioritize which set of IPs to alert on or block. Enabling or blocking an individual signature therefore affects a dynamically changing set of potentially thousands of IPs, updated every few hours, that map to a single threat feed or the intersection of multiple threat feeds.
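The decomposition described above can be sketched as follows. Feed names and the data structure are illustrative; the real rule generation is more involved.

```python
from collections import defaultdict

def build_signatures(feeds):
    # feeds: {"blocklist_de": {"ip", ...}, "alienvault": {...}, ...}
    # First record, for each IP, the set of feeds that list it.
    membership = defaultdict(set)
    for name, ips in feeds.items():
        for ip in ips:
            membership[ip].add(name)
    # Then collapse IPs sharing the same feed combination into one
    # composite signature named after that combination.
    signatures = defaultdict(set)
    for ip, names in membership.items():
        signatures["_".join(sorted(names))].add(ip)
    return dict(signatures)
```

Disabling one composite signature then toggles every IP currently carrying exactly that feed combination.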

Users also have the option of adding site-specific MineMeld IPv4 feeds to enable additional commercial MineMeld subscriptions independently purchased, or other proprietary feeds.

Entering the URL as shown above will automatically add the custom MineMeld reputation feed to the customer's configuration, and the local rule corresponding to the feed can then be managed like the other public MineMeld feeds.

URL and Domain Feeds

The MineMeld domain and URL feeds processed by MetaFlows are below:

Table: MineMeld domain and URL feed sources and their current indicator counts

These feeds are used to detect when:

  • A user issues an HTTP request to a URL or domain deemed to be malicious or
  • A user receives an email containing a malicious URL or link to a malicious domain whether or not the user clicks on the links.

When either of these two conditions occurs, a high-priority event is generated that can be used to block those specific communications.

There is also an additional option to enable real-time email notification. When bad emails are detected, users also get a warning email instructing them to discard the email.

MineMeld support will automatically be added the next time your system self-updates or the sensor software is restarted.


QRadar Support

The MSS now fully supports the QRadar SIEM from IBM in CEF log format. QRadar is an excellent SIEM, but it requires classifying and mapping every event type it sees to an internal category. QRadar comes with a large number of common IDS rules (~50,000) already classified but not mapped. Besides having to manually map all these rules, QRadar-MSS users would also need to continuously create additional QRadar IDs (QIDs) to map to the much larger rule set used by the MSS (which changes daily). All this required a mechanism to update QRadar dynamically as new rules are published. With the update released today, no manual classification or mapping operations are necessary.

The MetaFlowsCEF log source automatically parses the 13 event types generated by the MSS and presents them in the QRadar default view. All MSS events are automatically mapped to new or existing QIDs without any manual user operations. This makes the QRadar SIEM much easier to use.
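The CEF format being parsed has a standard pipe-delimited header (CEF:Version|Vendor|Product|Version|SignatureID|Name|Severity|extensions). A simplified sketch of such a parser is below; the sample event in the usage test is made up for illustration, and this sketch ignores CEF escaping and multi-word extension values, which a real log source must handle.

```python
def parse_cef(line):
    if not line.startswith("CEF:"):
        raise ValueError("not a CEF line")
    # Split the seven header fields; everything after the seventh pipe
    # is the key=value extension string.
    parts = line[4:].split("|", 7)
    version, vendor, product, dev_version, sig_id, name, severity = parts[:7]
    extension = parts[7] if len(parts) > 7 else ""
    # Simplified: assumes extension values contain no spaces.
    fields = dict(kv.split("=", 1) for kv in extension.split() if "=" in kv)
    return {"vendor": vendor, "signature_id": sig_id,
            "name": name, "severity": int(severity), "fields": fields}
```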

To set up QRadar for the MSS, perform the following steps:

    1. Download the MetaFlowsCEF log source to the QRadar box.
    2. Import it with the command /opt/qradar/bin/ -action import -f
    3. Verify that the import was successful and assign the MetaFlows sensors to this log source. Also make a note of the log source ID assigned by QRadar to the MetaFlowsCEF log source (something like 400[1-9]).
    4. Edit the file of all sensors and add the line export QRADAR=1.
    5. On the sensor you designate as the main QRadar updater, create the file /nsm/etc/qradar.ini to allow the sensor to communicate with the QRadar server (see the example below). Also add the line export QRADARLOGSOURCEID=<logsourceid>, where <logsourceid> is the number you noted in step 3 (probably something like 4001, 4002, etc.).
    6. Restart the sensors.

Sample qradar.ini:

certificate_file = /nsm/etc/qradar.pem
auth_token = f3f1201b-3562-46d1-9b8b-9a1623870000
server_ip =
/nsm/etc/qradar.pem is a copy of the file located at /etc/httpd/conf/certs/cert.cert on your QRadar box.
auth_token is obtained from your QRadar application.
server_ip is the IP address of your QRadar box.

The QRadar updater sensor will automatically add to QRadar any new IDS rules added by the sensor's rule update (which will be the same across all your sensors). This happens through the QRadar API in the background while the sensor is running. The first time the updater runs, it will have to catch up with about 50,000 definitions, so it will take many hours. Subsequent updates will take less time.

After each QRadar update, the email address associated with the sensor owner will receive a summary of the update process.

QRadar integration is a bit complex, so do not hesitate to contact us with any questions.


Search In Packet Logs

You can now search for arbitrary strings directly in the historical packet logs. The only requirement for this search is at least one IP address in addition to the search string.

For example, in the search below we are looking for the IP address in any packet either sent or received by the host. The search is also restricted to an hour's worth of packets on 5/7/2018.


So why would you look for an IP address string in the packets? Well, this is normally done when there is more than one proxy and the system is not able to properly identify the proxy chain. In that case the offending IP will be recorded in the X-Forwarded-For field of the HTTP headers. Once you find the headers, you can find the real flows and then search again to get the data exchanged, specifying the source and destination ports.

But this search feature is much more powerful than that; in fact, you can also look retroactively through your packet history using full Perl regular expressions!

If you have read this far and you are an expert user, you will be wondering about the example above. The search string above would actually match more than intended, because the dots really mean any character (for example, 139a182b44c203 would also match). To be more precise, you would need to enter:


But suppose you wanted to match a specific set of IP addresses.

Using a regular expression you could search for:


Just imagine what you can search for when you are hunting down specific strings or patterns. This little new feature (also available through the CLI interface as the -Q option) should really expand the power of our historical packet logging system. It will let you easily dig through your network history for hidden clues about what happened in the past.
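The dot-escaping pitfall above can be demonstrated in a few lines of Python, using the address 139.182.44.203 implied by the "139a182b44c203" example earlier in this post:

```python
import re

# An unescaped dot matches ANY character, so this pattern over-matches.
loose = re.compile(r"139.182.44.203")
# Escaping the dots (re.escape does it for you) matches only the literal IP.
strict = re.compile(re.escape("139.182.44.203"))
```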

Websockets in server mode

You can now use Websockets and disable Flash for good. We added support for Websockets instead of Flash for direct real-time connections to our sensors to retrieve real-time data and payloads. Sensors can be configured in client mode or server mode. In client mode, the sensor and the browser both connect outbound as clients and are connected at the network level by our forwarder. Websockets have been working in this mode for quite some time. Sensors can also be configured in server mode, where the browser needs to connect to the sensor, which acts as a Websocket server. Since sensors only have a self-signed SSL certificate, secure Websockets would not work. To get around this, we added a small web server (probably the smallest web server you will ever see) that only serves a root certificate authority to be installed in your browser. Go to:

https://<your sensor ip> for non-Windows systems

https://<your sensor ip>:81 for Windows systems

and install the certificate.

Then configure your sensor as a server and make sure to add its static IP address in the sensor configuration. After restarting the sensor, you will be able to connect in server mode using Websockets.

Proxy Detection Support

This has been awaited for some time. The MetaFlows Security System now detects proxied connections. The proxy IP is swapped with the original client IP so that the client can be correctly identified in the events. This has a dramatic effect on correlation, since most proxied hosts only proxy HTTP and use their real IP for DNS and other communications. Using the real IP for correlation and analysis will correctly correlate IDS HTTP events and file downloads with IDS events and service discoveries triggered by different protocols.

When a proxied host is detected, a message of the form xff=<realip><-<proxyip> is appended to the event and the proxy IP is replaced. So you will see the real IP, not the proxy.

When you analyze the packet data, the system automatically switches back to the proxy IP in order to look for the packets containing the proxy IP rather than the real IP (since the packets are stored before the IP is replaced).
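The substitution described above can be sketched as follows. The event structure and header handling are illustrative assumptions, not the sensor's internal format:

```python
def unproxy_event(event, headers):
    xff = headers.get("x-forwarded-for")
    if not xff:
        return event  # no proxy detected, leave the event alone
    # The first hop in X-Forwarded-For is the original client.
    real_ip = xff.split(",")[0].strip()
    proxy_ip = event["src_ip"]
    # Swap in the real IP and record the mapping in the event text.
    event = dict(event, src_ip=real_ip)
    event["note"] = "xff=%s<-%s" % (real_ip, proxy_ip)
    return event
```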

Here is a real example of two events related to (the real IP) downloading suspicious content through the proxy:


Notice that when we detect a proxy, a P is associated with the proxied host.

This feature will be available as soon as your sensor is restarted or self-updates. Let us know if you have questions.

Happy hunting!

Your dedicated MetaFlows Team.


Splunk App

We have developed a Splunk network security app available at


It receives events generated by the MetaFlows sensors and breaks them down by the following types:

  • Multisession Analysis
  • High Priority Events
  • IDS Events
  • Network Logs (3rd party logs sent to the sensors)
  • File Transmission Analysis
  • User Discovery
  • Service Discovery
  • Host Discovery
  • Mac Discovery
  • Suspicious URL Transmission Analysis
  • IPS Notifications
  • User Rankings
  • Modsecurity

From the app you can either drill down on Splunk itself or jump to the MetaFlows console to gather more forensic information like packet payloads.

You can install the app using the Splunk application management tools. In order to send events to Splunk, you need to add a configuration line in the /nsm/etc/ startup script of your sensors. The SSL-encrypted syslog messages are sent to the MetaFlows Splunk App through TCP port 3015 (please make sure your sensor can communicate on this port).

It is an early beta version; please let us know how you like it.

Please see more details at

Happy Hunting!

The MetaFlows Team.