Recent blog entries for adulau

2013-04-01 Information Visualization Is Just A Starting Point

Keywords in Common Vulnerabilities and Exposures
Quantity of household waste collected in the Province of Luxembourg - year 2012

Information Visualization Is Just A Starting Point

Information visualization is not an end in itself but a step toward improving our understanding of data. Following a small discussion in the train about the visualization of open data, I did a small experiment to analyse the statistics about waste collection in my region. The result of this experiment is available along with some random notes. But the main question came from someone else who looked at the visualization and basically told me: "I don't get it". He is right: the experiment is just there to trigger more analysis (and sometimes more visualization) with the objective of improving our understanding. Initially, the source data is usually not analysed at all, just sitting there waiting to be understood. Coming back to the data about waste collection, the initial discussion about its understanding or interpretation wouldn't have been triggered if the first visualization step had not been done.

So in that scope, I tried a similar approach with a dataset I built from my cve-search tool. My idea was to look at the terms used in all the descriptions of Common Vulnerabilities and Exposures (CVE). I did a first CVE terms visualization experiment and then tweeted about it. This triggered various explanations, like why some terms predominate, as commented by Steve Christey.

It clearly showed that this is an iterative process, especially to better understand the data. It's also an interactive process in order to improve the visualization and the data source. Following good advice from Joshua J. Drake, I added a lemmatizer to keep only the root of each term and also excluded the standard English stopwords. With the visualization, we saw from some occurrences (e.g. unknown or unspecified) that the CVEs are based on incomplete information.
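
As an illustration of that cleaning step, here is a minimal sketch of the idea (not the actual cve-search code), assuming NLTK with its punkt, wordnet and stopwords data installed:

import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer

# first run only: nltk.download('punkt'); nltk.download('wordnet'); nltk.download('stopwords')
lem = WordNetLemmatizer()
stop = set(stopwords.words("english"))

def terms(description):
    # lemmatize each token, drop English stopwords and non-words
    tokens = nltk.word_tokenize(description.lower())
    return [lem.lemmatize(t) for t in tokens if t.isalpha() and t not in stop]

print(terms("Unspecified vulnerability allows remote attackers to execute arbitrary code via unknown vectors"))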

I'm quite sure this is not finished and is just the beginning of more work and experiments in visualization. I have read various books about information visualization, but the result is often very static and you don't really see the iterative process the authors went through to reach their visualization goals. Sometimes, you just see a result without the process and the tools used to make the visualization happen.

At least with free software like D3.js, we now have a set of tools to understand how a visualization was built and maybe improve or discuss it. So if you want to play with or improve the visualization of the terms used in software vulnerability descriptions, let me know.

“You want an open mind, but not an empty head. Just because something is a new or fashionable alternative, doesn't mean we need to get stupid when judging it.” – Edward Tufte

Syndicated 2013-04-01 21:11:08 from AdulauWikiDiary: RecentChanges

2013-02-23 Vulnerability Management Is Just An Approximation

Everybody needs a hacker

Software Vulnerability Management Is Just A Huge Approximation

An approximation is a representation of something that is not exact. To be precise, vulnerability management is not even a mathematical approximation as we know it for the value of Pi. But where does this utterly huge approximation come from? The first origin is the very definition of "vulnerability management". If you look at various definitions, like the one from Wikipedia or in some information security standards, you get something like "a process of identifying → classifying → remediating → mitigating software vulnerabilities". Many information security vendors might tell you that this is an easy problem, but you can ask yourself: if this is such an easy problem, why are so many organizations still compromised through software vulnerabilities?

In my pragmatic eyes, it's very broad, so broad that a first reaction is to split the problem into parts that you can solve. Let's just look at the initial step: identifying software vulnerabilities.

To solve this problem, the first part is to discover, know and understand the software vulnerabilities. Everyone is discovering vulnerabilities every day (just look at how many bug reports go into the Linux kernel bug tracker), and very often when you report a bug, you don't even know whether it is a software vulnerability. The worst part is that an organization (or an individual) doesn't exactly know what software they are running. If someone tells you that they have a "software vulnerability management" product able to detect all the software running on a system, it's a lie. If such software existed, it would be the perfect program, able to solve the virus detection issue while also solving Turing's halting problem. Just look at a simple software appliance and the set of software required to run it.

Discovering vulnerabilities might be easy, but it's difficult to be exhaustive. Even when a vulnerability is found, there is a market to limit its publication (like the zero-day vulnerability market). For a given piece of software, there might be a large set of unknown vulnerabilities (I'm tempted to talk about Java, but I think every software might fall into that category). Does this mean that you should give up? I don't think so. You must work on your vulnerability management, but don't blindly trust solutions that claim to solve the whole issue.

Finally, my post is not meant as bashing; it was an opportunity for me to talk about a side project I'm working on to ease the collection and classification of Common Vulnerabilities and Exposures (CVE). The project is called cve-search and it's not a complete vulnerability management solution, just a small tool to partially solve the identification and classification parts.
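
If you want to poke at the data collected by cve-search, here is a rough sketch of a query. The MongoDB layout (database "cvedb", collection "cves" with "id" and "summary" fields) is an assumption on my side, so check the project README for the actual schema:

import pymongo

cves = pymongo.MongoClient()["cvedb"]["cves"]
# hypothetical full-text filter on the CVE summaries
query = {"summary": {"$regex": "buffer overflow", "$options": "i"}}
for cve in cves.find(query).limit(5):
    print(cve["id"], "-", cve["summary"][:80])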

“When the time comes to leave, just walk away quietly and don't make any fuss.” – Banksy

Tags: infosec security vulnerability

Syndicated 2013-02-23 20:47:26 from AdulauWikiDiary: RecentChanges

2011-12-25 Against SOPA or How To Make Soap

I'm against SOPA... So I'll explain how to make soap with olive oil

Once more, some lobbyists are trying to regulate the Internet with some of the stupidest laws or rules. SOPA (in the US) is yet another attempt to break down the freedom of citizens worldwide in order to preserve an archaic business model. As I have a preference for concrete action leading to direct social improvement, I'll explain how to make soap (it's better than SOPA and more useful; please note the clever inversion of the letters). My soap recipe is released under the public domain dedication (CC0).

Stop SOPA, make SOAP

Safety Disclaimer

Making soap is a chemical process that requires your full attention, especially because you'll be using sodium hydroxide, which is a corrosive substance. So respect the proportions and the process, and read the whole process multiple times before starting. Wearing protective gloves and goggles is highly recommended. Avoid aluminium kitchen utensils, as aluminium is attacked by sodium hydroxide.

Background of the chemical process

Making soap is one of the first chemical processes discovered by humanity. The process is called saponification: a base is used to hydrolyze the triglycerides contained in fats (vegetable or animal). This generates a fatty acid salt along with glycerol (the greasy touch of the soap). Each fat has a specific saponification value (usually called SAP in saponification tables), expressed as the amount of base (usually sodium hydroxide) required to saponify 1 gram of that fat. The amount of base is reduced to keep the resulting soap a bit fat (what is called the "excess fat"). I even find it convenient to keep a safety margin, to ensure that the hydrolysis is complete and consumes all the sodium hydroxide.

So that's the basis if you want to make your own soap; there are other rules to consider, but for this recipe this is enough. In my case, I use olive oil as the fat. It is easy to find, and I have a preference for organic olive oil (to ensure that the oil producer is taking care of the environment). But you can use non-organic olive oil too (it's usually cheaper).

Ingredients

  • 1000 grams of olive oil
  • 124 grams of pure sodium hydroxide / NaOH (the olive oil has a SAP factor of 0.134 and we want 7% excess fat → run bc and type (1000*0.134)*0.930; in general: (total weight of fat * SAP factor of the fat) * (0.900<->0.960); see the sketch after this list)
  • 350 grams of tap water (usually between 31% and 35% of the total fat weight; in this recipe ~ 1000*0.350)
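
For a quick sanity check of those quantities, a small computation (values from this recipe; the 0.930 factor corresponds to the 7% excess fat):

fat = 1000.0        # grams of olive oil
sap = 0.134         # NaOH saponification factor for olive oil
excess_fat = 0.07   # fraction of the fat left unsaponified

naoh = fat * sap * (1 - excess_fat)   # 124.62 g, rounded down to 124 g to stay on the extra-fat side
water = fat * 0.35                    # 350.0 g (between 31% and 35% of the fat weight)
print(round(naoh, 2), "g NaOH,", round(water), "g water")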

Process

  • Put on your protective gloves and goggles.
  • Prepare the lye by pouring the sodium hydroxide into the water (!pour the sodium hydroxide into the water, never the reverse!).
  • Monitor the temperature of the prepared sodium hydroxide until it cools to around 46-47 degrees Celsius (the exothermic reaction will initially bring it to around 80 degrees Celsius).
  • At the same time, warm the olive oil to 46-47 degrees Celsius.
  • When both are at the same temperature (around 46-47 degrees Celsius), start to mix the warmed olive oil while incorporating the prepared sodium hydroxide (using a mixer speeds up the process; !use a large pot to avoid splashes of the prepared sodium hydroxide while mixing!).
  • When the mixture becomes consistent (especially when you can see a trace while removing the mixer), you have reached the critical point.
  • When you have a homogeneous consistency, you can pour the result into a plate (your mold).
  • Put a plastic film on the plate, touching the mixture (to keep oxygen from being in contact with the prepared soap).
  • In the next hours, you'll see the "gelification" process, where the soap becomes a gel (usually starting from the center).
  • After 24 hours, your soap becomes harder (see the picture above).
  • You can then remove it from the plate and cut the shapes you want from your soap block.
  • The soap must then dry for 4 weeks in a dry and clean place (see the picture above).

Tags: soap sopa freedom chemistry diy

Syndicated 2011-12-25 15:29:22 from AdulauWikiDiary: RecentChanges

17 Dec 2011 (updated 18 Dec 2011 at 17:11 UTC)

2011-12-17 Certificate Revocation Reasons 2011

This page is too big to send over RSS.

Syndicated 2011-12-17 12:05:57 (Updated 2011-12-18 17:11:22) from AdulauWikiDiary: RecentChanges

2011-10-02 Try and Vet T-Shirt Crypto Challenge Hack.lu 2011: The Solution

Try and Vet T-Shirt Cryptographic Contest at Hack.lu 2011

The Challenge

What Did You Get During Hack.lu 2011?

From the hack.lu website, you got a text including an encrypted message stream. During the conference, you got a t-shirt.

The horrible "Beer Scrunchie" subverted the hack.lu 2011 conference to hide some cryptographic materials. He especially abused the t-shirt for hack.lu 2011 to transmit under cover activities. We still don't know at which extend "Beer Scrunchie" abused the t-shirt. Everything is possible just like those trojan t-shirts discovered...

U2FsdGVkX19EAnHXVRgs2oajPS0zZ3+w8BlYdQbHMTI7GT9gvdgFkjtTarpNAmbz
ET8PRg72U8pydsLr4IaTt5n7fFz6jxyglU1ozZwjJhKAyPAftqxYvcnud4/cOiEV
2FutxaJYCORWsvQV+hi6j8LMqn5aJd7s2nhQ9BWji/ZjMZx/wXJVdCCmNL9HuWx9
q0KV/8nTaxOOEdGwENZT8rgSSb7qy5mcIlIBfdzqYAzynj8xLxHFmptNQfZaO3X0
MAbvS324WDeB3R5p6CaIDLeH95eN8jrqdXaDhxs1SrlJrq5inssTgsEttFUhHEe8
6unUI3i4sDeVvEcajMmxvKg0qQLqEkc56GXKXVuGYc+owEsgKW8JKk8DrfgbQMPy
mbaaN7h1PKjlXTIfkR9KXOMd0wy/KHEoM6FdWY1jjzB2Q9UODxgug6gNXciVpQB6
fpvlzvFkV8z8BfSMcDCo1GM6526hSYYtRF0RS3PoloSPjfvDCNVX86lMjKsx6etc
Wec6u4EuJVDI52dgSr3kslwlfswez4WM+H2cszKCf0xejql/tQsra6QAcj1JhSqD
C6AvtDV31IzLAhHy5Di4T1ONyk68WNU40BIsrNkb3lYFTtWtQeF5Z4DGwpcM9HKg
CbLIe9oiNONgrY+kn5RfkHgUaI/PbUQgWy/U6BkunbuqTuMXwiTeR3eaRwBnGQGJ
KL+w6duxhoZhCa9nrlr3I2Nx2l+bs9JIzp5h2nYIq6yhqAyQ6jE+lpAQk912FE1O
5AuOLW5bhMldPMVMlYlx6w==

Solution

The message on the website already gave some clues.

If you decode the Base64-encoded message, you'll see that the binary stream starts with "Salted__…". That's the behaviour of the OpenSSL salted encryption scheme: the stream is prefixed with "Salted__" to announce that the next 8 bytes are reserved for the salt. This is a good indication that the message has probably been encrypted with an OpenSSL tool or library. Now look carefully at the encryption schemes available in OpenSSL:

aes-128-cbc    aes-128-ecb    aes-192-cbc    aes-192-ecb    aes-256-cbc    
aes-256-ecb    base64         bf             bf-cbc         bf-cfb         
bf-ecb         bf-ofb         cast           cast-cbc       cast5-cbc      
cast5-cfb      cast5-ecb      cast5-ofb      des            des-cbc        
des-cfb        des-ecb        des-ede        des-ede-cbc    des-ede-cfb    
des-ede-ofb    des-ede3       des-ede3-cbc   des-ede3-cfb   des-ede3-ofb   
des-ofb        des3           desx           rc2            rc2-40-cbc     
rc2-64-cbc     rc2-cbc        rc2-cfb        rc2-ecb        rc2-ofb        
rc4            rc4-40 

There are not many algorithms written by Bruce Schneier in a default OpenSSL build, except Blowfish (bf-*). Cryptographers usually recommend using the "default" mode, and in this case bf is Blowfish in CBC mode. So this is highly probable…
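
As a quick scripted version of the "Salted__" check described above (assuming the encoded text is saved as encrypted.txt):

import base64, binascii

blob = base64.b64decode(open("encrypted.txt").read())
if blob.startswith(b"Salted__"):
    salt = blob[8:16]   # the 8 bytes following the magic are the salt
    print("OpenSSL salted format, salt =", binascii.hexlify(salt).decode())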

Where Is The Key?

As the t-shirt hasn't been used until now, a good guess is that the key is hidden somewhere on it. If you look carefully at the text on the back of the hack.lu 2011 t-shirt, you'll see many typographic errors. The interesting part is to compare these typographic errors against the original text as published by Phrack. Please note the typo in the URL (even if a URL works, that doesn't mean it's the correct one ;-).

The original text from Phrack (original.txt)

This is our world now... the world of the electron and the switch, the beauty of the baud.
We make use of a service already existing without paying for what could be dirt-cheap if it
wasn't run by profiteering gluttons, and you call us criminals. We explore... and you call
us criminals. We seek after knowledge... and you call us criminals. We exist without skin
color, without nationality, without religious bias... and you call us criminals. You build
atomic bombs, you wage wars, you murder, cheat, and lie to us and try to make us believe
it's for our own good, yet we're the criminals. Yes, I am a criminal. 
My crime is that of curiosity. My crime is that of judging people by what they say and think, 
not what they look like. My crime is that of outsmarting you, something that you will
never forgive me for. 
I am a hacker, and this is my manifesto. 
You may stop this individual, but you can't stop us all... after all, we're all alike. 


The Conscience of a Hacker, The Mentor, January 8, 1986, 
http://www.phrack.org/issues.html?issue=7&id=3#article 

The text from the hack.lu 2011 t-shirt (modified.txt)

This is our world now... the world of the electron and the swich, the beauty of the baud,
We make use of a service already exeisting without paying for what could be dirt-cheep if it
was'nt run by profofiteering gluttons, and you call us cricriminal. We explore... and you call
us criminals. We seek after knowledge... and you call us criminals. We exist without skin
colo, without nationlity, without rrligious bias... and you call us crimnals. You build
atomic bombs, you wage wars, you murder, cheat, and lie to us and try to make us believe
it's for our own good, yet we're the criminals. yes, I am a criminal.
My crime is that of curiosity. my crime is that of judginfg people by what thy say and think,
not what they look like. my crime is that of outmarting you, something that you will
never forgive me for.
I am a hacker, and this is my manifasto.
you may stop this individul, but you can't stop us all... after all, we're all alike.


The Conscience of a Hacker, The Mentor, January 8, 1986, 
http://www.phrack.org/issues.html?issue=7$id=3#article 

So you can build a key from the differences, but how? That's the most difficult part (as there are many different ways to do it). As there is no natural way to generate a key, I decided to go for a long key that can be read easily from the original text. To build back the key from the differences between original and modified, you can use a word diff with your favorite GNU tools. We just discarded the punctuation and ignored case sensitivity.

wdiff -i -3 original.txt modified.txt | egrep -o "(\[-(.*)-\])" | sed -e "s/-//g" \
  | sed -e "s/\[//g" | sed -e "s/\]//" | sed -e "s/\.$//g" | sed -e "s/,//g" \
  | sed ':a;N;$!ba;s/\n//g'
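
If you don't have wdiff at hand, here is a rough Python equivalent using difflib; the tokenisation is an approximation of the pipeline above, so it may not reproduce the key byte-for-byte:

import difflib

def words(path):
    # whitespace-split words, lowercased, trailing punctuation stripped
    return [w.lower().rstrip(".,") for w in open(path).read().split()]

orig, mod = words("original.txt"), words("modified.txt")
key = []
for op, i1, i2, j1, j2 in difflib.SequenceMatcher(None, orig, mod).get_opcodes():
    if op in ("replace", "delete"):
        key.extend(orig[i1:i2])   # keep the original form of every changed word
print("".join(key))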

The key to decrypt the message, as generated by the wdiff pipeline above, is the following:

switchbaudexistingdirtcheapwasn'tprofiteeringcriminalscolornationalityreligiouscriminalsjudgingtheyoutsmartingmanifestoindividualhttp://www.phrack.org/issues.html?issue=7&id=3#article

and to decrypt the message, you'll need to use OpenSSL with the guessed parameters (it will prompt for the key above as the passphrase):

openssl enc -d -a -bf -in encrypted.txt -out decrypted.txt 

and the original decrypted message is:

I'm Beer Scrunchie and I'm the author or co-author of various block ciphers, pseudo-random number generators and stream ciphers.

In 2012, there will be two major events: the proclamation of a winner for the NIST hash function competition and probably the hack.lu 2012 infosec conference.

I hope that my Skein hash function will be the winner.

If you are reading this text and be the first to submit to tvtc@hack.lu, you just won a hack.lu ticket for next year. If I'm winning the NIST competition with my hashing function,
you'll get a second free ticket...

Bruce
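
As a side note, here is a sketch of what "openssl enc -d -a -bf" does under the hood. It assumes PyCryptodome, the ciphertext in encrypted.txt and the key string in key.txt; OpenSSL of that era derived the key and IV from the passphrase and the salt with one round of MD5 (the legacy EVP_BytesToKey scheme):

import base64
from hashlib import md5
from Crypto.Cipher import Blowfish

def evp_bytes_to_key(password, salt, key_len=16, iv_len=8):
    # OpenSSL's legacy key derivation: MD5, one iteration
    data, prev = b"", b""
    while len(data) < key_len + iv_len:
        prev = md5(prev + password + salt).digest()
        data += prev
    return data[:key_len], data[key_len:key_len + iv_len]

blob = base64.b64decode(open("encrypted.txt").read())
salt, ciphertext = blob[8:16], blob[16:]          # skip the "Salted__" magic
key, iv = evp_bytes_to_key(open("key.txt", "rb").read().strip(), salt)
plaintext = Blowfish.new(key, Blowfish.MODE_CBC, iv).decrypt(ciphertext)
print(plaintext[:-plaintext[-1]].decode())        # strip the PKCS#7 padding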

I got one correct answer 5 days after the conference, showing that the difficulty of recovering the key was bound to the uncertainty of the key generation. Next year, we may make a multi-stage t-shirt challenge for hack.lu 2012… from something rather easy to something very difficult.

Tags: crypto infosec ctf conference hacklu

Syndicated 2011-10-02 11:48:07 from AdulauWikiDiary: RecentChanges

4 Sep 2011 (updated 26 Jun 2012 at 19:03 UTC)

2011-09-04 Information Security Is Not a Matter of Compliance

A Radio On a Piano

Information Security Is Not a Matter Of Compliance But a Matter of Some Regular and Boring Activities

Drawing conclusions from experience is not always a scientific approach, but a blog is a place to share experience. Today, I would like to share my past experience with information security, and especially how difficult it is to reach some level of security through the compliance detours proposed by the industry or even by society.

Compliance is a Different Objective Than Information Security

Many compliance mechanisms exist in information security to ensure, on paper, the security of a service, a company or a process. I won't list all of them, but you might know PCI-DSS, TS 101 456, ISO/IEC 27001 and so on… Very often the core target of a company is to get the final validating document at the end of the auditing process.

Of course, many of those validation processes impose strong security requirements on the procedural aspects of information security management within the company. This is usually a great opportunity for the information security department to somehow increase its budget or its visibility. Everything is nice. But usually when the paperwork is finished and the company has got its golden certificate, the investment in information security is just put aside.

But concrete information security is composed of many little dirty jobs that no one really wants to do. Usually, in the compliance documents, those tasks are underestimated (e.g. a check-box at the end of a long list) or not even mentioned (e.g. discarded during the risk assessment because they seem insignificant). Those tasks are usually a core part of information security, not only for protecting systems but also for detecting misuse earlier.

I summarized the tasks in three large groups (it's not an exhaustive view), but they show some of the core jobs to be performed in the context of protecting information systems:

Reading and Analyzing Never Ending Log Files

Log analysis is usually the main trigger to find a compromised system. When Clifford Stoll found that a system was compromised at LBL, it was due to a specific 75-cent accounting issue. Likewise, the recent security breach at kernel.org was discovered through an error in the logs from a non-installed software (Xnest) and the pop-up of an invalid certificate. That's how infections or compromised infrastructures get discovered.

But to discover those discrepancies, you need someone at the end. The answer, here, is not a machine reading your logs (I can already hear the SIEM vendors claiming this can be automated). It's a human with some knowledge (and some doubts) picking up something unusual that can lead to the detection of something serious.

Log analysis is tedious work that needs curious and competent people. It's something difficult to describe in a compliance document. The analysis job can be boring and is not really rewarded. That's why you sometimes see the idea of "outsourcing" log analysis, but can an outsourced analyst detect an accounting issue that hinges on knowing that some user is not working during that time shift?

IMHO, it's sometimes better to invest in people and promote the practice of regular log analysis than to pursue an additional security certification without the real security activities associated with it.

Reducing the Attack Surface

The less software you have, the better it is for security. It sounds very obvious, but that's a core concept. We pile more and more features into each piece of software we use. I never saw a control in a security standard or certification that recommends having a policy to reduce software or remove old legacy systems. If you look carefully at "Systems Development Life Cycle" models, they always show the perfect world, without ever getting rid of old crappy code.

Maintaining the Software and Hardware

Maintaining software and hardware could fall into the category of "reducing the attack surface", but it's another beast, often underestimated in many security compliance processes. A piece of software is like a living organism: you have to take care of it. You don't acquire a tiger and put it in your garden without taking care of it. Before maintaining, you obviously need to design systems with "flaw-handling in mind", as Marcus J. Ranum said, or as Wietse Venema, or Saltzer and Schroeder put it back in 1975. In today's world, we are still not going in that direction, so you have to maintain the software to keep out the daily security vulnerabilities.

The main issue with a classical information system is its interactions with other systems and its environment. If you (as a security engineer) recommend updating a software component in a specific infrastructure, you always hear the same song: "I can't update it", "It will be done with the yearly upgrade" (usually taking 4 years), "Do you know the impact of this update on my software?" (and obviously you didn't write their software), "It's done" (while checking, it still reports the old version number), "It's not connected, so we don't need to patch" (while looking at the proxy logs, you scare yourself with the volume of data exchanged) and… the classical "it's not managed by us" (while reading the product name in the job title of the user who answers that).

Yes, upgrading software (and hardware) is a dirty job: you have to bother people and chase them every day. Even in information security, upgrading software is a pain and you usually break stuff.

All those dirty jobs are part of protecting information systems; we have to do them. Security certification is distracting a lot of professionals from those core activities. I know they are arduous and not rewarded, but we have to do those tasks if we want to make the field more difficult for the attackers.

You might ask why a picture with a radio on a piano… Both can make the same "music" but are operated in different ways, just like information security on a system and on paper is done in two different ways.

Tags: infosec compliance

Syndicated 2011-09-04 16:17:30 (Updated 2012-06-26 19:03:20) from AdulauWikiDiary: RecentChanges

2011-05-22 Ease Your Log Analysis with Ranking

Apocalypse de milieu de terrain / Mittelfeldapokalypse (Tim Ernst)

Ease Your Log Analysis With BGP Ranking and logs-ranking

Raphael Vinot and I worked on a network security ranking project called BGP Ranking to track malicious activity per Internet Service Provider (referenced by their ASN, Autonomous System Number). The project is free software and can be downloaded, forked or updated on GitHub. As BGP Ranking recently reached a beta stage, we now have a nice dataset about the ranking of each Internet service provider in the world. Every day, we are trying to find new ways to use the dataset to improve our life and remove the boring work from network forensics.

A very common task when you are doing network forensics is to analyse a huge stack of log files. Sometimes, you don't even know where to start, as the volume is so large that you end up looking for random patterns that might be suspicious. I wrote a small piece of software called logs-ranking to prefix each line of a log file (currently only W3C common/combined log files are supported) with the ASN and its BGP Ranking value. logs-ranking uses the whois interface of RIPE RIS to get the origin AS for an IP address and the CIRCL BGP Ranking whois interface to get the current ranking.

To use it, you just need to stream your log file through it and specify the log format (apache in this case):

cat ../logs/www.foo.be-access.log | perl logs-ranking.pl -f apache > www.foo.be-access.log-ranked

and you'll get an output like this, with the origin ASN and the ranking (a float value) prefixing the existing log line:

AS15169,1.00273578519859,74.125.... 
AS46664,1.00599888392857,173.242...

So now you'll be able to sort your logs to show the most suspicious entries first (at least those from the most suspicious Internet service providers):

sort -r -g -t"," -k2 www.foo.be-access.log-ranked

So this can be used to discriminate, in proxy logs, the infected clients trying to reach a bulletproof hoster where a malware C&C is located, or infected machines on the Internet trying to infect your latest web-based software… The ranking can be used for other purposes too; it's just a matter of imagination.
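
For instance, a small triage sketch in Python, assuming the "ASN,rank," prefix format produced by logs-ranking above (the threshold value is an arbitrary example):

THRESHOLD = 1.005

with open("www.foo.be-access.log-ranked") as ranked:
    for line in ranked:
        asn, rank, logline = line.split(",", 2)
        if float(rank) >= THRESHOLD:
            print(asn, rank, logline, end="")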

Tags: networkforensic infosec freesoftware

Syndicated 2011-05-22 19:20:49 from AdulauWikiDiary: RecentChanges

6 Mar 2011 (updated 18 May 2011 at 22:10 UTC)

2011-03-06 Why The Philosophical Works Should Be Free

A close look at a Welsh onion flower

Roberto Di Cosmo recently published a work called "Manifeste pour une Création Artistique Libre" ("Manifesto for Free Artistic Creation"). It is not really a manifesto in the traditional sense but rather an essay about potential licensing schemes in the Internet age. My blog entry is not about the content of the work itself but about the non-free license used by the author. On the linuxfr.org website, many people (including myself) commented on how strange it is to publish a work about free works while the manifesto itself is not free (licensed under the restrictive CC-BY-NC-ND). The author replied to the questions, explaining his rationale for choosing the non-free license with an additional "non printing" clause on top of CC-BY-NC-ND.

I have a profound respect for Roberto's work promoting and supporting the free software community, but I clearly disagree with the claims that philosophical works must not have any derivatives and cannot be free works. I also know that Richard Stallman disallows derivative works of his various essays. If you carefully check the history of philosophical works, there are a lot of essays from various philosophers that went through revisions due to external contributions (e.g. Ivan Illich has multiple works evolving over time due to interactions or discussions with people). It's true that publishing the evolution of a work was not a very common practice. But that was mainly due to the slowness of the publishing mechanisms, not to the works themselves.

The main argument used to avoid freeing such works is usually the integrity of the author's work. But a lot of works have been modified over time to reflect the current use of the language or to make a translation into another language. Does this affect the integrity of the author's work? I don't think so. Especially since, for any free work (including free software), attribution is required in any case. So by default, the author (and the reader) would see the original attribution and the modifications over time (something recently improved in the free software community by the extensive use of distributed version control systems like git).

Maybe it's now time to consider that free software goes far beyond the simple act of creating software, and also touches any act of thinking or creation.

Tags: freedom freesociety society copyright freesoftware

Syndicated 2011-03-06 21:35:57 (Updated 2011-05-18 22:10:50) from AdulauWikiDiary: RecentChanges

5 Mar 2011 (updated 27 Mar 2011 at 16:13 UTC)

2011-03-05 Monitoring Memory of Suspicious Processes

Monitoring The Memory of Suspicious Processes

If you are operating many GNU/Linux boxes, it's not uncommon to have issues with processes leaking memory. It's often the case for long-running processes handling a large amount of data, usually allocating small chunks of memory and not freeing them back to the operating system. Maybe you have played with Python's "gc.garbage" or abused Perl's Scalar::Util::weaken function, but to reach that stage, you need to know which processes ate the memory.
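
As a minimal sketch of that gc.garbage inspection on the Python side: force a collection and list the objects the collector found unreachable but kept around:

import gc

gc.set_debug(gc.DEBUG_SAVEALL)   # keep every unreachable object in gc.garbage
gc.collect()
for obj in gc.garbage[:10]:
    print(type(obj), repr(obj)[:80])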

Usually, when looking for processes eating memory, you have a look at the running processes using ps, sar, top, htop… For a first look without installing any additional software, you can use ps with its sorting functionality:

%ps -eawwo size,pid,user,command --sort -size | head -20
 SIZE   PID USER     COMMAND
224348 32265 www-data /usr/sbin/apache2 -k start
224340 32264 www-data /usr/sbin/apache2 -k start
162444  944 syslog   rsyslogd -c4
106000 2229 datas     redis-server /etc/redis/redis.conf
56724 31034 datap    perl ../../pdns/parse.pl
32660  3378 adulau   perl pdns-web.pl daemon --reload
27040  4400 adulau   SCREEN
20296 20052 unbound  /usr/sbin/unbound
...

It's nice to have a list sorted by size, but usually the common questions are:

  • Is that normal?
  • What's the evolution over time?
  • Did the value increase or decrease over time?
  • Which process's memory usage is evolving badly?

My first guess was to dump the values above into a file, add a timestamp in front, and write a simple awk script to display and graph the evolution. But before jumping into it, I checked whether Munin has a default plugin to do that per process. There is no default plugin… but I found one called multimemory that does basically that, per process name. To configure it, you just need to add it as a plugin with the processes you want to monitor:

[multimemory]
env.os linux 
env.names apache2 perl unbound rsyslogd

If you want to test the plugin, you can use:

%munin-run multimemory
perl.value 104148992
unbound.value 19943424
rsyslogd.value 162444
apache2.value 550055

You can then connect to your Munin web page and you'll see the evolution for each monitored process name. After that, it's just a matter of digging in with "valgrind --leak-check=full" or your favorite profiling tool for Perl, Ruby or Python.


Tags: unix command-line memory monitoring

Syndicated 2011-03-05 11:16:52 (Updated 2011-03-27 16:13:08) from AdulauWikiDiary: RecentChanges

2011-01-01 Often I'm Wrong But Not Always

A shaky night

Often I'm Wrong But Not Always...

“Prediction is very difficult, especially if it's about the future.” – Niels Bohr

Usually at the beginning of the year, you see all those predictions about future technology or about social behaviour in front of those technologies. In the information security field, you see plenty of security companies telling you that there will be many more attacks, or that those attacks will diversify, targeting your next mobile phone or your next-generation toaster connected to Facebook. Of course! More malware and security issues will pop up, especially if you increase the number of devices in the wild, the number of their wild users, and especially those wild users hoping to get money fast. So I'll leave the security companies to their marketing-prediction press releases.

As we are at the beginning of a new numerical year, I was cleaning up my notes in an old Emacs folder (from 1994 until 2001). I discovered some interesting notes and drawings, and I want to share a specific one with you.

In my various notes, I discovered an old recurring interest in Wiki-like technologies at that time. Some notes reference Usenet articles (difficult to find back) and some c2.com articles about how a wiki is well (un)organized. Some notes were unreadable due to the lack of context for that period. There is even a mention of using a Wiki-like system in the enterprise, or of building a collaborative Wiki website for technical FAQs. There are some more technical notes about the implementation of a wiki-like FAQ website, including a kind of organization by vote. I'll let you find today's website doing exactly that…

Suddenly, in the notes, there is a kind of brainstorming discussion about the subject. The notes include some discussion between myself and other colleagues. And there is an interesting statement about Wiki-like technology from a colleague: it's not because you like a technology that other people will use it or embrace it. That's an interesting point, but the argument was used to avoid doing anything or investing any time in the Wiki-like approach. Yes, it is right, but the question is more about how you make stuff and how people would use it. My notes on that topic ended with the brainstorming discussion. A kind of shock to me…

What's the catch? Not doing or building something to test it out. You can talk eternally about whether an idea is good or bad. But the only way to know is to build the idea. I was already thinking like that, but I had forgotten that this happened to me… Taking notes is good, especially when you learn that you should pursue and transform your ideas into reality, even amid the surrounding criticism.

My conclusion to those old random notes would be something like this:

If you see something interesting and you have a strong conviction that it could succeed in one way or another, do or try something with it. (Please note the emphasis on the do.)

Looks like I'll keep this advice again for the next years…

Tags: contribute startup innovation notes internet

Syndicated 2011-01-01 11:03:52 from AdulauWikiDiary: RecentChanges
