After reading the Every Email In UK To Be Monitored article and its comments over at Slashdot I once again felt like encrypting each and every email I send using GPG/PGP. Now, for this encryption to work, the person I am sending a message to would need to have GPG/PGP set up too. A lot of technically-minded people already have this set up, but I cannot expect everyone to be using encryption.
The reason not everyone uses GPG/PGP to encrypt their emails might be that, even though GPG/PGP have become a lot more usable for the end user in the last few years, these programs are probably still too technical, and thus hard to understand, for non-technical users.
This got me thinking about how people could be persuaded to use public-key encryption for email. After a bit of brainstorming an idea came to my mind, which I would like to present here.
Basic idea
What about creating a program that acts as both an SMTP and a POP3/IMAP proxy server, includes all the logic needed for encryption, and encrypts/decrypts messages transparently?
If this logic were moved out of email clients, we would get a solution that works universally with each and every email client out there.
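To make the idea a bit more concrete, here is a minimal sketch of the outgoing (SMTP) half of such a proxy in Python. This is not a finished design: it assumes the third-party python-gnupg module, a hypothetical upstream relay at mail.example.com, and that a public key exists for each recipient; a real implementation would also need key discovery, the POP3/IMAP half and proper MIME handling.

import asyncore
import smtpd
import smtplib
import gnupg  # third-party python-gnupg module (assumed to be installed)

UPSTREAM = ("mail.example.com", 25)  # hypothetical real SMTP server to relay to

class EncryptingProxy(smtpd.SMTPServer):
    """Accepts mail from the local client, encrypts it, relays it upstream."""

    def __init__(self, localaddr, remoteaddr):
        smtpd.SMTPServer.__init__(self, localaddr, remoteaddr)
        self.gpg = gnupg.GPG()  # uses the default ~/.gnupg keyring

    def process_message(self, peer, mailfrom, rcpttos, data):
        # Encrypt to the recipients' public keys; fall back to plain text
        # if no usable key is found (a real proxy should warn the user).
        encrypted = self.gpg.encrypt(data, rcpttos)
        payload = str(encrypted) if encrypted.ok else data
        server = smtplib.SMTP(*UPSTREAM)
        try:
            server.sendmail(mailfrom, rcpttos, payload)
        finally:
            server.quit()

# The mail client is then pointed at localhost:8025 instead of the real server.
EncryptingProxy(("localhost", 8025), UPSTREAM)
asyncore.loop()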
2008-10-16
2008-09-18
EuroparlTV for everyone? No, only for users of proprietary software!
The European Parliament (EP) has just recently started a new service: EuroparlTV, a web-TV service that should give citizens of the European Union (actually, everyone around the world) a way to inform themselves about how the EP works, what it does, and so on.
After I first read this news over at heise (German) I was impressed, but started to fear that yet again some sort of government had invested in proprietary software and would be able to bring its services only to users of such software. Seconds later my fears became reality.
EuroparlTV seems to work only for users of either Adobe's proprietary Flash player (via the proprietary Adobe Flash file format) or users of Microsoft's Windows Media Player (via the proprietary WMV file format).
What this means to an open web, that is usable for everyone, should be clear.
Basically this is a service all citizens of the European Union pay for, but some cannot use. Is this really how governments (and the EP is some sort of government) should treat their citizens? Rather not.
On the one hand the European Commission is fighting vendor lock-in and monopolies, but on the other hand it directly helps these vendors by creating such services. Not a smart move in my opinion, nor an understandable one.
What I am asking myself, though, is why the EP was unable to create such a service, which in itself could be quite interesting, without making all users of that service use proprietary software.
Is it so hard to deliver the service in a free (as in freedom), standardized format?
I will leave answering these questions to you, but keep in mind that there are alternatives to this whole proprietary mess, like Ogg, which are completely free.
Personally I am pretty disappointed by this move. However, I hope that I at least informed people that there is a problem with EuroparlTV.
To put it simply and shortly: this way the EP does a great deal to help vendor lock-in whilst fighting the freedom of its own citizens, even though it should be the other way round.
2008-09-10
sptest - a Python unittest extension
Even though this is meant to be an introduction to sptest, I want to start off by letting you know why I wrote this extension to the Python unittest module.
I am currently working on a (still private) project that uses Python's unittest module and the underlying framework. Even though unittest is a great utility for creating unit tests, I found that the output it generates is unusable for me. I wanted something different; maybe a bit more aesthetic than the simple command-line output unittest provides.
So I started off writing a class extending unittest.TestResult to fit my needs. I soon realized that interfacing with this part of unittest is not as easy as it could be, but I still continued to write my class.
After two hours of hacking I noticed that this class had become a monster. It was huge, and I felt uncomfortable having such a huge class lying around in a "runtests.py" file just for the sake of that pretty output.
This was the point when I decided to move all that code into a separate project and try to come up with a more intuitive API. That was the moment sptest was born, about 5 hours ago.
What I came up with is a small Python module that makes it easier to customize the way unit test results are presented (or stored). It currently includes two output handler classes: one providing fancy CLI output on ANSI terminals and the other providing XML output.
Additional output handler classes could store the result of the unit tests in a database or send it to a central point on the network, but implementing that is up to someone else, for now.
Running unit tests with sptest is as simple as calling:
sptest.TestMain(TestSuite).run()
By default the FancyCLIOutput handler class will be invoked, and you will immediately see why the handler is called the way it is.
In order to generate an XML file containing the test results one just has to modify the call to sptest to look like this:
sptest.TestMain(TestSuite, output_class=sptest.output.XMLOutput).run()
sptest also provides support for preparation and cleanup functions. The only thing you have to do is define these functions and adjust the arguments passed to TestMain accordingly.
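Purely as an illustration, such a setup might look like the sketch below; note that the keyword-argument names prepare_func and cleanup_func are hypothetical stand-ins here, the real names are in sptest's documentation and examples.

def prepare():
    print("setting up test fixtures")    # e.g. create temporary files

def cleanup():
    print("tearing down test fixtures")  # e.g. remove them again

# NOTE: hypothetical argument names; check sptest's docs for the real signature.
sptest.TestMain(TestSuite, prepare_func=prepare, cleanup_func=cleanup).run()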
Most of the code is already documented and a doxygen configuration file for generating the html documentation comes with the code. Also, two examples are included that show how to use sptest.
2008-09-02
UPDATE: Google Chrome: Good or evil? -- GOOD!
UPDATE: You can find the update to this article at its bottom.
Even though Google's slogan is "don't be evil" I am not entirely sure whether this also applies to their newest development: the Google Chrome browser.
The announcement over at the Official Google Blog tells us that Google is about to release a Free Software-based browser. When I first read the announcement I wasn't too impressed to read that Google has actually built a browser; this was logical, and I had been expecting this move for years. Also, reading that they based their browser on Free Software didn't impress me too much either, but then I found the comic.
2008-08-31
Autoconf and Python: checking for modules
I am currently writing a Python application that uses GNU Autotools as its build system, and I noticed that determining whether a specific Python module is installed is not that easy, and that no usable Autoconf macro exists for it. So I came up with my own solution, which I would like to share with you.
The AC_CHECK_PYTHON_MODULE macro takes two arguments: The module name and optionally the variable name holding version information. This way it is not only possible to determine whether a module is installed (ie. loads in Python) on the current system, but also retrieve version information from that module.
The following example checks whether the Crypto module is installed and retrieves its version information from Crypto.__version__:
AC_CHECK_PYTHON_MODULE(Crypto, __version__)
The macro itself never reports an error, but rather only a found/not-found result. Error checking is up to the user and can be done via these two Autoconf variables:
- PYTHON_<MODULE_NAME>
- PYTHON_<MODULE_NAME>_VERSION
PYTHON_<MODULE_NAME> is set to "1" if the module is present and "0" if not present.
PYTHON_<MODULE_NAME>_VERSION is only set when the version variable argument has been given and contains the version information of the module, if the module has been found. If the module is not present this variable is also set to "0".
The version variable argument is optional as I wrote, so the following invocation works too and only checks whether the distutils module is present:
AC_CHECK_PYTHON_MODULE(distutils)
As I wrote earlier in this article I would like to share this macro with you. You can download it here.
2008-08-22
Debian GNU/Linux 5.0 ("lenny") on a Samsung P55-Pro T8100 Sevesh
I have recently bought a new laptop, a Samsung P55-Pro T8100 Sevesh. As I was not able to find an installation report for this model anywhere on the internet I thought writing one myself is a good idea. This way people interested in getting this laptop or installing GNU/Linux on it can get some information.
The article covers the hardware configuration of the laptop itself, a list of which features of the laptop work and which don't (do not be afraid, most things work perfectly well out of the box), and finally a short installation report.
2008-07-03
Is trying to fix (E)SMTP really worth it? [part 2 - infrastructure]
This article is the second in my series about the flaws of (E)SMTP and the whole Internet mail infrastructure, and how it could possibly be fixed. The main focus of this part is a new approach to the infrastructure which should help make emailing more secure, more reliable and less spam-prone.
The first article can be found here and points out flaws and problems in the current systems.
2008-06-23
Status update
It has been quite a while since I last wrote an article and published it here.
It's not like I got tired of blogging. The reason why there hasn't been an update for such a long time is that I was doing my final exams in the past two months.
After passing my exams on Friday I should have time to write some articles again, so watch out for new articles here.
It's not like I got tired of blogging. The reason why there hasn't been an update for such a long time is that I was doing my final exams in the past two months.
After passing my exams on Friday I should have time to write some articles again, so watch out for new articles here.
2008-04-06
Why are hardware manufacturers keeping specs to themselves?
This is one question I have been interested in ever since I started using GNU/Linux.
Just think about it for a moment. About 20 years ago you got specifications for pretty much every piece of hardware you bought. You were given exact instructions on how to use the hardware you just bought, not only how to install it. Things have changed since then.
If you buy any piece of hardware today you actually have to expect not to get any documentation on how to "talk" to your new toy. You are only given a CD (sometimes even only a link to a homepage) containing drivers for a few specific operating systems, usually only Microsoft Windows.
Now, I am no driver hacker, so I probably wouldn't be able to implement a driver for anything on my own anyway, but the Free Software community would benefit greatly from hardware documentation, as there are a lot of capable driver hackers out there.
This is not a problem that only affects the Free Software community though. There are a lot of pieces of hardware which do not work on recent proprietary operating systems anymore due to lack of support by their manufacturers.
At least this problem would not exist for Free Software operating systems, such as GNU/Linux, if hardware makers published documentation for their hardware. The people still using devices which are well beyond their end-of-life could implement drivers on their own, without being dependent on anyone.
What I am really wondering about is why hardware companies are unable to coin standards for accessing devices of the same class. It works perfectly well for USB (take USB mass storage devices as an example) and I do not understand why there can't be standardized interfaces to other hardware, such as network adapters, as well. On a very low level such standardized interfaces do work; just think of PCI, PCI Express or AGP.
Actually, if you think about this for a few more seconds you should realize one thing: having standardized interfaces for devices of the same class would cut a lot of costs for hardware makers. Why? Well, if they design a brand new networking chip and still implement the given standard, there is no need to write a new driver. Wait, there would be no need for per-device drivers at all. Implementing a common driver that accesses the standardized interface would be enough for a whole range of devices.
So what am I asking of hardware makers? I would love to see companies creating devices of the same class get together, create standardized interfaces, publish them and implement them in their new devices.
I know, this is not likely to happen anytime soon, so a more realistic approach is asking for Free Software drivers and/or documentation.
Personally, I have stopped buying hardware which merely "works" with GNU/Linux; I have come to the point where I only buy hardware which either comes with Free Software drivers from the manufacturer or with documentation that allows Free Software drivers to be implemented.
This is probably the best way of showing these companies what you demand: Freedom.
2008-03-28
Free Software Supporter
I was quite stunned when I noticed that the Free Software Foundation (FSF) has recently started a new monthly newsletter, called the Free Software Supporter.
The reason I was amazed is not the fact that the FSF is now publishing such a newsletter, but rather the fact that I had not heard about it yet. Basically, the Supporter is about informing Free Software enthusiasts about recent happenings and the work of the FSF, the GNU project and the global Free Software community.
It seems as if I am not the only person excited about the Supporter; Joshua Gay, who apparently writes it, also seems to like it, as he says in a blog post:
I hope that you enjoy the Supporter. I am looking forward to reflecting each month upon the work of the FSF, the GNU project, and the global free software community. I only hope that the number of highlights I add each month will continue to grow as quickly as the community is growing. In either case, we hope to keep it short and we hope to keep you informed.
You can sign up to receive the Supporter via email on a monthly basis at http://lists.gnu.org/mailman/listinfo/info-fsf and you can read the first issue online at http://lists.gnu.org/archive/html/info-fsf/2008-03/msg00000.html.
Also, if the Supporter looks like an interesting read to you, you may as well enjoy the monthly newsletter the FSF Europe publishes. The FSFE Newsletter can either be read online or you can sign up for the FSF Europe press-release mailing list.
Personally I believe both newsletters are worth reading and give you a great overview of what has happened in the past month, what is going to happen and the work done by the FSF and FSF Europe.
2008-03-27
Is trying to fix (E)SMTP really worth it? [part 1]
This one question has been in my mind for quite some time already. I mean, everyone uses SMTP (knowingly or not) when sending out emails and everyone sending emails also knows what SPAM is and receives SPAM messages.
However, few know how old SMTP actually is, and that, even though it serves everyone well, it was designed at a time when everyone still thought of Spam as canned meat. Back in 1982 SMTP was a great achievement and a lot of kudos should go to its creators, but now, in 2008, SMTP has become more of a liability than a great tool.
Originally, I wanted to write a single article covering all shortcomings of SMTP and possible solutions to these problems, but the text grew too long while writing, so this is the first of two articles I am going to write on this topic. This first part is about the problems with SMTP and why fix-ups for SMTP, even though they do work to some extent, are not proper solutions to today's issues.
Due to the way SMTP was designed, and the way the Internet was back then, it is prone to various things, like SPAM messages, sender spoofing, data manipulation and so forth. A few attempts have been made at fixing some of the shortcomings of SMTP, like ESMTPA (SMTP-AUTH), SPF, Callback Verification, and DKIM, but none of them has really fixed all the problems that exist, and all of these modifications are in my opinion mere workarounds. Let us have a look at why both SPF and DKIM fail to fix all the problems SMTP has right now.
How to reject mails containing OOXML attachments using Exim4
I finally did it. I modified my Exim's configuration to reject any mail with an OOXML attachment (ie. docx, pptx, xlsx).
There are two main reasons for this step. First of all I am not able to open these files and I believe I will not be able to do so and get them properly rendered anytime soon. Secondly, people using the new Microsoft Office suite seem to be ignorant enough to think everyone is able to view those files, which is not the case.
I am trying to make one point here:
People sending emails to other people should always send files in internationally standardized (open) formats, such as ODF or PDF, so that everyone is able to open and use the attachments. I am also trying to make people who send out emails in those formats aware of the fact that not everyone can open them, that not everyone wants to invest a lot of money in new applications, that some people generally prefer Free Software, and that there is no way of using those files with Free Software right now.
Enough of the introduction; here is how to achieve this behavior using Exim4:
deny message = Message contains attachment of unwanted type ($found_extension)
demime = docx:pptx:xlsx
Putting this snippet in the acl_check_content section of your exim4.conf should do the trick.
Oh, and while I am at it, you can easily use this snippet to drop mails with other attachments, based on the file extension.
For example, in order to reject all mails containing WMV files just use demime = wmv.
Note that this snippet checks for specified file extensions instead of MIME types. People can still get mails through in those formats if they modify the file extension, so do not use this method as a security measure.
2008-03-26
SFLC now also providing services to for-profit clients
The Software Freedom Law Center, known for providing pro bono legal assistance to Free Software projects, announced the formation of Moglen Ravicher LLC, a law firm also providing services to for-profit clients.
"We are pleased to extend the services of the Software Freedom Law Center to companies that support software freedom," said Eben Moglen, founding director of SFLC.
Moglen Ravicher LLC is fully owned by the Software Freedom Law Center, and all profits will go to support SFLC's operations. Clients of Moglen Ravicher LLC will receive legal counsel from the same attorneys that staff the Software Freedom Law Center.
This not only means that companies are now able to get legal assistance on Free Software matters from the SFLC, but also that the center found a way of helping its own funding.
It also seems as if the first for-profit client is OpenNMS:
An initial client of Moglen Ravicher LLC is OpenNMS, an open source enterprise grade network management platform. OpenNMS has retained the firm for representation regarding violations of the GNU General Public License (GPL).
For more information see the homepage of the SFLC and the news entry announcing this step.
"We are pleased to extend the services of the Software Freedom Law Center to companies that support software freedom," said Eben Moglen, founding director of SFLC.
Moglen Ravicher LLC is fully owned by the Software Freedom Law Center, and all profits will go to support SFLC's operations. Clients of Moglen Ravicher LLC will receive legal counsel from the same attorneys that staff the Software Freedom Law Center.
This not only means that companies are now able to get legal assistance on Free Software matters from the SFLC, but also that the center found a way of helping its own funding.
It also seems as if the first for-profit client is OpenNMS:
An initial client of Moglen Ravicher LLC is OpenNMS, an open source enterprise grade network management platform. OpenNMS has retained the firm for representation regarding violations of the GNU General Public License (GPL).
For more information see the homepage of the SFLC and the news entry announcing this step.
Happy Document Freedom Day!
Just in case you do not know yet: today is Document Freedom Day.
Today is Document Freedom Day: Roughly 200 teams from more than 60 countries worldwide are organising local activities to raise awareness for Document Freedom and Open Standards.
What does this mean for me personally? Less than one would expect. I have been advocating the use of Open Document formats (such as ODF) for the past two years already, and try to do so whenever possible.
People react very differently when I raise this issue. Some appreciate being informed that there are Open Document formats, which guarantee interoperability with everyone, but others tend to tell me "everyone uses [Microsoft] Office, isn't that format a standard?". The answer is always the same: NO.
Neither the old proprietary Microsoft Office format nor the new format, OOXML, is a standard in my opinion, and here is why:
The old format is not documented at all, and no international standards body, such as the ISO, has ever made this format a standard.
The new format, OOXML, which is in the news quite often lately, is being pushed to be made an ISO standard. People often think that, as documentation (which is said to be of poor quality) is available, making this format an international standard would be a good thing.
I am afraid I have to say NO once again here. There are too many references to the old proprietary format, which is a huge no-go for something that should become an international standard.
Also, there already is an international standard for office documents: ODF. In my opinion there is no point in having two separate standards for the same thing, and the chance of such a situation causing a lot of havoc is quite high.
So, personally, I have quite often suggested lately that people switch to OpenOffice.org instead of buying Microsoft's latest Office suite. Document freedom and the use of Free Software are not my main arguments lately, but rather that people switching to OpenOffice.org now do not have to learn a new user interface. People are lazy, and this argument works perfectly.
And there is yet another point for using Open Standards in IT:
Think of the Internet and where it would be without Open Standards (and also Free Software). Think of how everything on the Internet would work together. Think of one browser supporting only its own network protocol (which of course would be proprietary) and other browsers only supporting theirs. The Internet would not be what it is today without Open Standards and guaranteed interoperability.
More information about Document Freedom Day can be found in the last news entry over at documentfreedom.org.
Less spam again
I found a solution to the problem last described in this article.
To sum up the problem I was experiencing: my anti-spam system (namely SpamAssassin) did not detect spam mails anymore.
Now here is the reason it did not: after some more investigation of the problem I noticed that the spam emails were received via a local connection (forwarded from fetchmail), and one of my Exim ACLs says not to scan emails from localhost for spam.
So, the solution might be a hack, but it worked out perfectly: starting fetchmail with the -S <servername> argument causes it to send emails to the given SMTP server rather than to localhost. Using the real hostname of my server caused the "do not scan local mails" rule not to kick in, and all mails received via fetchmail are scanned again.
Problem fixed.
2008-03-25
Moving my blog
And yet another post today. As I am planning to take down my personal server in the next few weeks (maybe months) I have moved my blog to wordpress.com. A 301-redirect has been set up at http://sp.or.at/blog so people (and robots) are still able to find my blog.
Mails from Technorati not arriving: not obeying their own SPF rules
As I was looking into problems with my mail server I noticed one more thing: I was wondering why I did not receive password recovery emails from Technorati. It seems as if they are not obeying their own SPF rules:
2008-03-25 14:46:23 H=nat-365m.technorati.com (t120.technorati.com) [208.66.64.4] F= rejected RCPT : Not authorized by SPF
Now I am wondering why someone sets up SPF for their mail domain when they are in fact sending emails from other IP addresses as well. Time to update your SPF rules, Technorati...
Removing a lot of frozen mails from Exim’s mail queue
After writing my last article, I started digging into my mail configuration and, after doing a quick "mailq", noticed a lot of frozen messages in Exim's queue. After inspecting the logs and the mails themselves I noticed the problem was caused by a broken POP server I retrieve mails from periodically. A few days ago something went wrong on that server and all messages were marked as unread, causing my fetchmail to re-fetch all of them (about 2.5K).
Since my mail server is configured to do sender verification, and a few very old mails came from domains or systems which are non-existent today, about 50 mails ended up being frozen.
But how to remove all frozen mails from Exim's queue? I ended up using mailq | grep frozen to get a list of all messages (and more importantly their message IDs) and saved that to a file. I then wrote a minimalistic Python script attached to this article to delete all those messages. Consider the script a quick and dirty hack, but it might come in handy for some of you. Get it here.
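The attached script remains the authoritative version; for readers who just want the general idea, a sketch along the same lines might look like this (it reads the mailq output directly instead of a saved file, and it assumes the Exim message ID is the third whitespace-separated column of each frozen line):

import subprocess

# List the queue and pick out the frozen messages.
output = subprocess.Popen(["mailq"], stdout=subprocess.PIPE).communicate()[0]
for line in output.splitlines():
    if "frozen" not in line:
        continue
    # In mailq output the message ID is the third column, e.g.:
    # "25m  1.2K 1KxQ4z-0003pD-5a <sender@example.org> *** frozen ***"
    msgid = line.split()[2]
    # Ask Exim to remove the message from its queue.
    subprocess.call(["exim", "-Mrm", msgid])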
More spam again?
Right now I am asking myself whether it just affects me, or whether more spam is being sent out and less is being detected by anti-spam software again.
I set up my mail server in February and noticed a decrease in spam mail delivered to my mailbox compared to my old system. However, in the past two weeks more and more spam mail has been delivered to my mailbox again. So is it just me, my system or the system's configuration or is everyone else receiving more spam again?
Anyways, it's about time to inspect the configuration of my mail system again...
2008-03-19
Python IDEs tested
In the past two days I have been playing around with various Python IDEs. It is not like I need a fully-fledged IDE; I'm fine with GNU Emacs, to be honest. However, everyone is talking about IDE X and IDE Y, how they save so much time using these programs and how these programs assist them with hacking.
Well, I decided it was time to give a few IDEs a try. There were only two requirements I had: the IDE has to be Free Software and it has to run on GNU/Linux.
If you are planning to read on, please be aware that this was no real test; it rather contains my observations regarding the IDEs I tested, what I liked and did not like, and whether one surprised me enough to actually use it instead of my good old plain GNU Emacs.
2008-01-16
nwu development news #0
So, today I am starting off with a new story series: the nwu development news.
Now what is this series about? Well, to make a long story short, it is about what has recently changed in nwu's codebase and how nwu is coming along.
Just a side note: the first story in this series is of course number 0, as real programmers start counting at 0. :-)
For those of you who are now wondering what nwu is or could be, I did write about nwu on this weblog already and the 'nwu - an introduction' post should give you a good idea of what it is.
So, what has changed recently? Basically I merged my changes back into trunk, which means that most of these things are going to be used now. This means that the application framework, the scheduler, the APT "Packages" file parser, support for gzip compression in both the SecureXMLRPC client and server and the brand-new RPC framework are either already being used, or are going to be used soon.
Except for the RPC framework, which would need to be adapted, and the application framework, which depends on nwu.common.config, all these pieces of code also work stand-alone and can be used in other Python applications too.
2008-01-13
Using parts of nwu in your project
As I promised I am writing about nwu again. But instead of reporting on recent development efforts I would rather like to point something else out today: The nwu.common Python module contains code which can be used stand-alone in your applications. Some of the functions the module provides could come in handy, so I thought it was a good idea to let you know.
This article is going to explain the stand-alone nwu.common.* modules and their function.
nwu - an introduction
This article should give you a brief overview of what network-wide updates, one of my projects, is about.
Network wide updates, or nwu, is a free software package licensed under the GPL (version 3 or later). It allows an administrator to remotely install software on and roll out security upgrades to managed computers. It is targeted at GNU/Linux systems using the Advanced Packaging Tool (APT) for package management and thus should run fine on all GNU/Linux distributions based on Debian GNU/Linux (such as gNewSense and all Ubuntu flavors).