If you're anything at all like me, then you were deeply saddened to learn that Microsoft has recently discontinued the licensing of Windows 3.1.
Hey now, cheer up! A little part of Windows 3.1 is still with us. You could even say that it will always be a part of us -- living on in our hearts and nostalgic memories of simpler times. You could also say that it lives on in the form of this dialog box that I found in Vista when trying to install a new font:
I've created this blog as an outlet to help release some of the rage that your software has caused me. Free software, open source, commercial -- I don't discriminate. Subscribe and keep up to date on all the pain you've caused in both my personal life and my career in IT.
Wednesday, November 26, 2008
Friday, November 14, 2008
JAX-RPC Client in Netbeans 6.5RC2 vs Eclipse Ganymede
I'm not certain, but I think there's something goofy with the JAX-RPC client support in Netbeans 6.5 RC2 using the JAX-RPC Web Services plugin version 0.2.
I'm attempting to interface with a Liferay 5.1.2 instance so that I can add and modify users from an external application. Nothing too difficult. After installing the JAX-RPC plugin in Netbeans, I chose New->Web Service Client and pointed the dialog to the WSDL URL that Liferay provides. For example: http://10129:password@liferayserver:8080/tunnel-web/secure/axis/Portal_UserService?wsdl . Note that I'm passing a Liferay user-id and password and specifying /secure/ in the URI. This Liferay user-id happens to correspond to an administrator in my portal instance. While it is possible to get the WSDL without going through authentication (just omit /secure/ and don't pass credentials), I wanted to give Netbeans all the hints I could to make this easy on myself.
At this point Netbeans went about creating a massive number of files to consume the service. Seriously, a boatload. Once the client was created I tried testing the service using the functionality in Web Service References, but kept getting deserialization errors about an "invalid boolean value". No matter which function I tried to call I got the same error. Interestingly enough, by making a few intentional mistakes and watching the Liferay logs I could see the actual calls were making it to the server just fine, and I guess that makes sense since the errors are during deserialization of the returned data.
I tried sticking a client invocation on a JSP in my project by right-clicking in the open JSP file and choosing Web Service Client Resources->Call Web Service Operation. Specifically, I'm trying getUserById(LiferayID), from which I should receive a Liferay User object, and from that I'm going to display the user's email address on my page.
No luck. Same damn deserialization error, this time reported in the Glassfish V2 output. It looks something like this:
java.rmi.RemoteException: Runtime exception; nested exception is:
deserialization error: invalid boolean value:
at com.sun.xml.rpc.client.StreamingSender._handleRuntimeExceptionInSend(StreamingSender.java:348)
at com.sun.xml.rpc.client.StreamingSender._send(StreamingSender.java:330)
at com.sgmbiotech.liferay.client.UserServiceSoap_Stub.getUserById(UserServiceSoap_Stub.java:1432)
...
Caused by: deserialization error: invalid boolean value:
at com.sun.xml.rpc.encoding.SOAPDeserializationContext.deserializeMultiRefObjects(SOAPDeserializationContext.java:107)
at com.sun.xml.rpc.client.StreamingSender._send(StreamingSender.java:256)
... 35 more
Awesome. I stepped through the code with the debugger and while I can see the exception occurring, I don't really understand why it's occurring. I did some searching online and couldn't find much except some complaining about Netbeans JAX-RPC support in general.
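In hindsight, one way to narrow this down would have been to POST the SOAP request by hand and eyeball the raw XML coming back, to see exactly which boolean value the runtime chokes on. A rough sketch of the request plumbing -- note the operation element and auth values here are illustrative guesses, not pulled from the actual WSDL:

```java
import java.util.Base64;

public class SoapDebug {
    // HTTP Basic auth header value for the Liferay credentials.
    static String basicAuth(String user, String pass) {
        return "Basic " + Base64.getEncoder()
                .encodeToString((user + ":" + pass).getBytes());
    }

    // Minimal SOAP 1.1 envelope for getUserById (element names are guesses).
    static String envelope(long userId) {
        return "<soapenv:Envelope xmlns:soapenv=\"http://schemas.xmlsoap.org/soap/envelope/\">"
             + "<soapenv:Body><getUserById><userId>" + userId
             + "</userId></getUserById></soapenv:Body></soapenv:Envelope>";
    }

    public static void main(String[] args) {
        // POST envelope(10129) to the /secure/axis/Portal_UserService endpoint
        // with this Authorization header (HttpURLConnection works fine),
        // then dump the response body to see what the server actually returns.
        System.out.println(basicAuth("10129", "test"));
        System.out.println(envelope(10129));
    }
}
```

The raw response body would show what the server is sending for each boolean field, which is more than the stack trace ever told me.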
On a hunch that the problem was with Netbeans' code generation, I decided to fire up Eclipse (Ganymede, full EE install) and use it to generate the appropriate classes. I created a new web project and then did a New->Other->Web Services->Web Service Client, turned the slider down to "assemble client", and pointed the service definition at the Liferay WSDL file. Note: I recommend copy / pasting the URL to the WSDL file into the service definition field in this dialog, because using Browse and typing it in is an experience much like falling off a moving truck into a giant pile of cheese graters. I won't go into it.
Eclipse generated the client for me, which consisted of only six class files, unlike the 312 that Netbeans produced. It even used good package names. I believe Eclipse in this case is just calling out to WSDL2Java from the Axis project, because I've done it by hand once or twice and got very similar results, if memory serves.
I manually imported the Eclipse-generated client into my Netbeans project, added some necessary jar files, hand-wrote the call in my jsp file and voila - IT WORKED.
The invocation code in the JSP file looked like this:
com.liferay.portal.service.http.UserServiceSoapProxy proxy = new com.liferay.portal.service.http.UserServiceSoapProxy();
((javax.xml.rpc.Stub)proxy.getUserServiceSoap())._setProperty(javax.xml.rpc.Stub.USERNAME_PROPERTY, "10129");
((javax.xml.rpc.Stub)proxy.getUserServiceSoap())._setProperty(javax.xml.rpc.Stub.PASSWORD_PROPERTY, "test");
((javax.xml.rpc.Stub)proxy.getUserServiceSoap())._setProperty(javax.xml.rpc.Stub.ENDPOINT_ADDRESS_PROPERTY, "http://liferayserver:8080/tunnel-web/secure/axis/Portal_UserService");
com.liferay.portal.model.UserSoap user = proxy.getUserById(10129);
response.getWriter().write(user.getEmailAddress());
Note that setting username, password, and the endpoint is required.
The Netbeans code that didn't work looked like this:
try {
    client.UserServiceSoapService userServiceSoapService = new client.UserServiceSoapService_Impl();
    client.UserServiceSoap portal_UserService = userServiceSoapService.getPortal_UserService();
    ((javax.xml.rpc.Stub)portal_UserService)._setProperty(javax.xml.rpc.Stub.USERNAME_PROPERTY, "10129");
    ((javax.xml.rpc.Stub)portal_UserService)._setProperty(javax.xml.rpc.Stub.PASSWORD_PROPERTY, "test");
    ((javax.xml.rpc.Stub)portal_UserService)._setProperty(javax.xml.rpc.Stub.ENDPOINT_ADDRESS_PROPERTY, "http://liferayserver:8080/tunnel-web/secure/axis/Portal_UserService");
    client.UserSoap user = portal_UserService.getUserById(10129); // the call that blows up during deserialization (generated type name may differ)
    response.getWriter().write(user.getEmailAddress());
} catch (javax.xml.rpc.ServiceException ex) {
    java.util.logging.Logger.getLogger(client.UserServiceSoapService.class.getName()).log(java.util.logging.Level.SEVERE, null, ex);
} catch (java.rmi.RemoteException ex) {
    java.util.logging.Logger.getLogger(client.UserServiceSoapService.class.getName()).log(java.util.logging.Level.SEVERE, null, ex);
} catch (Exception ex) {
    java.util.logging.Logger.getLogger(client.UserServiceSoapService.class.getName()).log(java.util.logging.Level.SEVERE, null, ex);
}
Note that the calls look almost identical here, it's just that the client generated by Netbeans has deserialization issues for whatever reason.
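For what it's worth, the generated catch blocks bury the useful part of the error. Unwrapping the cause chain before logging surfaces the actual deserialization message -- here's a quick sketch using a simulated exception, not the real Liferay call:

```java
import java.rmi.RemoteException;

public class CauseDump {
    // Walk to the bottom of the cause chain so the real error message
    // isn't hidden inside the JAX-RPC RemoteException wrapper.
    static String rootMessage(Throwable t) {
        while (t.getCause() != null) {
            t = t.getCause();
        }
        return t.getMessage();
    }

    public static void main(String[] args) {
        // Simulate the kind of wrapping the JAX-RPC runtime does.
        RemoteException wrapped = new RemoteException("Runtime exception",
                new RuntimeException("deserialization error: invalid boolean value"));
        System.out.println(rootMessage(wrapped)); // prints the nested message
    }
}
```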
Sorry about the bad code formatting, I don't know how to fix it with this crappy blogger editor.
Sunday, October 26, 2008
Eclipse Ganymede WTP Can Bite Me
****
I am an idiot.
The J2EE module dependencies of my project were reset when I did the renaming business. It took me an embarrassingly long time to figure this out, but I wasn't exactly helped much by Eclipse's awesome "Preferences" pages.
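For anyone else bitten by this: WTP records those module dependencies in the web project's `.settings/org.eclipse.wst.common.component` file, which is what got clobbered for me. A healthy one looks roughly like this (project names here are made up):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<project-modules id="moduleCoreId" project-version="1.5.0">
  <wb-module deploy-name="MyWebProject">
    <wb-resource deploy-path="/" source-path="/WebContent"/>
    <wb-resource deploy-path="/WEB-INF/classes" source-path="/src"/>
    <!-- the dependent-module entry is what goes missing after a rename -->
    <dependent-module deploy-path="/WEB-INF/lib"
        handle="module:/resource/MyUtilProject/MyUtilProject">
      <dependency-type>uses</dependency-type>
    </dependent-module>
  </wb-module>
</project-modules>
```

If the dependent-module entry for your Utility project is gone, nothing in it gets published, and you get exactly the ClassNotFoundExceptions described below.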
****
Approximately seven hours down the tubes tonight fighting with Eclipse WTP (Web Tools Platform). Everything was working fine earlier, but then I used Eclipse's own refactoring tools to rename a few projects and the whole thing went belly up. For some reason it's no longer publishing my projects properly to Tomcat and I'm getting persistent ClassNotFound exceptions for classes that are not in the project that I'm actually publishing.
Build environment seems OK; all of my unit tests run fine, including tests of the Dynamic Web Project in question, which references classes in another Utility project -- the very classes that are ultimately not found... everything works except for the publishing & running part. I can successfully create a new test client via Web Services and the client JSP pages load and fire, but when I hit the Invoke button: blammo, ClassNotFoundException: com.you.are.an.idiot.Haha not found.
I went so far as to completely wipe Eclipse and reinstall. I then re-created my workspace and projects and imported the source. I even ditched Tomcat and let Eclipse install a new one for me. Nothing. It's all hosed and I'm mad as hell.
Welp, I get to look like an ass in the morning because my working code isn't demonstrable. Sure wish I knew what happened, but I don't really care. I'm giving JDeveloper a download. Sorry Eclipse, I appreciate the vision and hard work, but most of the stuff I've tried outside of the core featureset has been way too fragile.
Eclipse... *snif* ... what really chaps my ass is that this crap was working this afternoon and I was just starting to dig in and refine the interface for this project. If only I hadn't decided to friggin RENAME some projects... By now I would likely be finished... but here I am, without a clue how to proceed without launching into a completely different environment.
If anybody knows of a secret "clean up everything related to Eclipse and every damn piece of configuration it's ever puked up and start over from scratch" button, I'm all ears. I'm pretty sure I covered everything when I started over, but I must've missed something.
Monday, October 20, 2008
Dear AJAX, Stop That
I want my browser's STOP button back!
Or maybe I should be directing this complaint at my browser? Hey Firefox, why can't the STOP button stop AJAX requests?
Or maybe I should just stop using Google Reader for my homepage. Hey Google, enough already with the "our web page is really a web application" business. I'm damn sick of having to make sure that the Google Reader page loads up before trying a search in Firefox. If I don't wait then Reader hijacks the browser when it finally gets its results back from AJAXville, which can take up to half a minute at times.
Tuesday, September 30, 2008
Is Auto-Hiding THAT Hard?
For Blod's sake, why can't anyone make "auto-hide" work on task bars? It hasn't worked well on any version of Windows from '95 through Vista. The Downloads window in Firefox or IM notifications in MSN Messenger, as for-instances, will not allow the task bar to auto-hide properly -- it'll just sit there, unhidden, waiting for you to figure out which window is trying to notify you of something or other that you don't care about.
I happen to like the task bar to be on the top of the screen, so when it doesn't auto-hide, I lose the tops of any windows up there which usually means I lose the ability to easily move them around and such. This happens to me every single day in Vista. Right now it's up there not auto-hiding and I have no idea which window is causing the problem, and whether it's just a matter of giving the offending window focus, or completely closing it to regain auto-hide functionality. I've got 3 SSH sessions via Putty, one instance of Word, three file explorer windows, one remote terminal, two Windows shells, one instance of FreeCommander, one instance of Notepad, one instance of Eclipse, an instance of Sage MAS90, 14 Firefox windows, and Outlook all running and I'll be damned if I'm going to start closing them all until the stupid Task Bar starts working right again.
This happens to me with Gnome panels also, but thankfully they do such a terrible job of "hiding" that I rarely use that feature when I'm running a Gnome desktop.
Friday, September 26, 2008
Top 10 Linux FOSS Keys
Linux and FOSS "top whatever" lists are all the rage these days on the blogoweb, so I figured I might as well cash in.
I've been using Linux and Free Open Source Software for many years now, and have developed my own set of preferences and tastes in this regard that work well for me. I thought I'd share some of them in hopes of helping out newbies or perhaps even inspiring some of the old-timers.
Specifically, I'd like to talk about some of the keys available on Linux. I realize that many of these keys might also be available on other operating systems, and I should also note that I haven't used every single key -- so if I miss an important one, let me know.
- The A key.
Not the most important key, surely, but a big dog no less. The king of the alphabet. The first letter of "Alphanumeric" and "A-Team". When used in conjunction with the CTRL key it will Select All in most of your favorite FOSS programs. Without the A key, there's no way you could view processes belonging to All users using `ps aux`, or see All sockets using `netstat -a` -- hell, you couldn't even spell netstat.
The A key. With it we can achieve the center, the essence, the heart of `man`.
- The Semicolon (;) key.
Ah the semicolon, a personal favorite of mine. With the semicolon key I can look grammatically smart to people more stupider than I; of whom are many. Without the semicolon key getting a C or Java program to compile would be a miserable experience, and you can forget about your cute little PHP app with the Javascript front-end.
The semicolon key -- keeping FOSS lovers' right pinkies in shape for over 30 years. Or more, or less. I dunno, how old are keys?
- The Backtick key.
Oh! the anguished, misunderstood, misrepresented backtick key. Ask a typical Windows user to press the backtick key and you'll have a good reason to chuckle arrogantly to yourself, just loudly enough so that they'll see it. "Why would I ever use that", they'll ask, full of stupid.
Often overshadowed by the tilde who found fame in mathematics, the backtick waits patiently on your FOSS keyboard, just above the Tab and just to the left of 1 -- waiting for you to need to exec something in your scripting language. Or waiting, perhaps, to be used covertly in a parameter sent to your poorly-written PHP script... but let's not generalize, not all backticks are bad just because most of them are.
Oh backtick, you kick so much ass!
- The Backspace key.
The cleaner. The remover of bad. Where would we be without the backspace key? We'd be arrowing around and using the delete key, that's where we'd be. What a nightmare! The delete key is one of the most overrated, poorly conceived keys on your FOSS keyboard, yet somehow it has historically taken precedence over backspace, who in some circumstances is reduced to coughing out a bunch of useless ^H^H^H characters.
Without the backspace key I wouldn't be able to remove the mistake I'm about to make.
- The Spacebar.
Perhaps the spacebar should have been at the #1 spot on my list. After all, it's freakin GIGANTIC! It's so god damned enormous that it's not even called a key -- it's called a BAR. And it's a bar because it's so dang useful. Here's how your code might look without the spacebar:
intmain(){return1;}
Compile THAT! Ha!
And what a perfect name, rich with double meaning. When future earthly entrepreneurs start opening merry little establishments on space stations around the universe, you can bet there will be more than a small shake of cool joints called "The Space Bar."
- The ellameno keys.
Reduced to a single letter in the minds of many by an unfortunate, cruel children's song, the l, m, n, and o keys deserve a mention. Thanks for the ls, the more, the nslookup, and the almost-middle letter of the acronym FOSS.
- The Print Screen key.
A dumping ground for homeless functions, the Print Screen key has remained a part of FOSS-compatible keyboards since its early beginnings when it would actually "print the screen." Now nobody really knows what to make of it. It might take a screenshot, sure, that's cool. On some non-FOSS keyboards it might INSERT, causing your cursor to type over everything in front of it, and I guess that's manageable. But what if it SysRqs???? What the hell is that? What if it SysRqs while taking a screenshot of your cursor changing behavior!?
Typically located near the Scroll Lock and Pause / Break keys, the Print Screen key is made even more ominous by the unsavory company it keeps. They're there, they're proud, and it's best if you just leave them the hell alone. But if you get yourself in with this selective crowd, they might just open up a whole new level of functionality that you never knew existed. But I dunno, I don't push on them.
- The Esc key.
Let's face it, the escape key is losing relevance. Rarely does the esc key actually perform any useful function -- just what does it mean to "escape" in a modern operating system anyhow? Richard Stallman completely redefined the key when he wrote Emacs, turning it into some kind of freakishly meta bastard, and nobody even noticed! Nobody stopped to think, "hey, if I press M-c, an awesome feature to capitalize the first letter of a word, won't it cause me to 'escape' from Emacs?" No, no it won't.
Look, the esc key might not have any functional relevance when it comes to your FOSS software, but it does have one very important function in the physical world. It gives you something to beat on when your FOSS programs that aren't supposed to crash suddenly stop responding. As if it were the controls to a time machine, we beat the hell out of the escape key every time something goes wrong. It is the avenue through which we allow our abusive tendencies to escape.
Escape key, I salute thee.
- The Arrow keys.
"Hey, we're here to stay, so just use us already!!"
Poor arrow keys. Everyone always hijacking other letters or even the number pad to perform the function that the arrow keys were specifically designed for. Twenty years ago it wasn't a given that a keyboard would have arrow keys, but nowadays I challenge you to find one without. So knock it off with this Num Lock crap and forget about your WASD and HJKL. We're never going to convince our proprietary brothers and sisters to make the switch to FOSS if we can't even reduce ourselves to using keys with arrows on them to move things in directions.
Arrow keys. Easy is good.
- The Windows key SUCKS.
In blog-style top-ten FOSS fashion, I completely bail when I run out of ideas and add something completely irrelevant. Open-Apple and Closed-Apple are stupid too.
Thursday, September 18, 2008
Viewsonic WPG-150 "Wireless Presentation Gateway"
"Hey, I thought this blog was about software, what gives?"
Well, Mr. Anonymous Strawman Setup Guy, it is. This post is about the Viewsonic WPG-150 Wireless Presentation Gateway -- a little box that allows a person to connect wirelessly to any VGA projector. I'm not going to focus on the WPG-150's hardware, as it seems to work just fine; its firmware on the other hand...
The way the WPG-150 works is this: it broadcasts its SSID over wireless which you connect to via your Windows-based PC or laptop. Once connected, you simply open a web browser and you'll be instructed to download the software required to actually send video to the device, which it then sends out via VGA to the projector. The client software consists of video and audio capture drivers necessary to send your display to the WPG-150, and a simple control-panel type application.
All that crap works just fine. No issues whatsoever. The problem is this: "what if I want to connect to the network while I'm making my presentation? What if my presentation involves showing a live website?" Hey, have no fear, the WPG-150 solves this problem for you as well by including a network jack allowing you to plug the device into your network. It's then smart enough to handle your network requests while still projecting your display.
Here comes the FAIL!
The wireless adapter inside the WPG-150 is hard coded with an IP address of 10.0.0.1, and the device has a built-in DHCP server that will assign the computer that connects to it an address of 10.0.0.2. Interesting choice of IP addresses there, since 10.0.0.0/24 is a common subnet used in private corporate networks, and 10.0.0.1 and 10.0.0.2 are very likely to be used by gateways or servers. Can you guess my network configuration? Yup.
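To make the collision concrete, here's a throwaway check -- plain JDK, nothing device-specific -- showing that both of the device's hardcoded addresses land squarely inside the 10.0.0.0/24 subnet a corporate LAN like mine already uses:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class SubnetCheck {
    // True if the address falls inside 10.0.0.0/24 (netmask 255.255.255.0).
    static boolean inTenDotZero(String addr) throws UnknownHostException {
        byte[] b = InetAddress.getByName(addr).getAddress();
        return (b[0] & 0xFF) == 10 && (b[1] & 0xFF) == 0 && (b[2] & 0xFF) == 0;
    }

    public static void main(String[] args) throws UnknownHostException {
        System.out.println(inTenDotZero("10.0.0.1"));    // the device's wireless IP -> true
        System.out.println(inTenDotZero("10.0.0.2"));    // the DHCP lease it hands out -> true
        System.out.println(inTenDotZero("192.168.1.1")); // a network it wouldn't clash with -> false
    }
}
```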
The WPG-150 only allows you to change the LAN IP, so using it on a 10.0.0.x network would require the additional purchase of a router. The website BCCHardware.com has a review of the WPG-150 in which they seem to imply that the 10.0.0.1 address can be modified via the default web page that the device serves up when you connect to the device without the client software. I was able to locate an embedded Java control on this page, but it errored out with a class not found exception. I even rolled back to Java 1.4 and used several different browsers. My hunch is that this is a left-over from a previous incarnation of the device.
So I emailed Viewsonic technical support via their online support form. No response for over a week.
So I broke down and called them up. You can get their tech-support phone number after going through a little online support questionnaire in which you click "no this did not solve my problem" about fifteen times. Thankfully I was able to speak with a representative within a few minutes of making my call. I made the mistake of trying to explain why the device wasn't working on my network, which just seemed to confuse the matter. Finally I just flat out asked, "just tell me this: how do you change the 10.0.0.1 address on the wireless interface?"
"You can't. It's hardcoded into the firmware."
"Is there an older version of the firmware I can roll back to or something?"
"No."
"Fail!"
"huh?"
"Nevermind. Thanks."
I called up my favorite sales rep and set up a return for the device, and at the same time put in an order for the InFocus LiteShow II, another Wireless Presentation Gateway that's a few dollars more expensive.
When the LiteShow II arrived I felt a little pang of panic as the device looks almost identical to the WPG-150. It even comes configured with the wireless interface set to 10.0.0.1!! Clearly there is an outfit out there private labeling these things. The firmware, to my relief, is not a copy of the WPG-150 and has several networking options available. I was able to configure the device in Access Point+ mode (or something) which basically turns the thing into a router. The LiteShow documentation sucks, so it took a while to figure this out, but essentially I was able to give the hard-wired LAN interface an IP on my private network, and the wireless interface an address in a different range, and it routes between the two successfully. WPA (TKIP or AES) encryption and RADIUS support on the wireless interface makes it a usable access point in a corporate setting.
Wednesday, September 17, 2008
Linux Action Show's Host to Produce Non-free Software
So I'm a regular listener of the Linux Action Show, a podcast devoted to news and commentary about -- you guessed it -- Linux. The "action" part of the show is that they talk loudly. For the most part I've always found the show to be interesting and insightful without being overly preachy, as Linux advocacy often is. Essentially: Linux is good but needs work, and we can put up with a few proprietary drivers even though we hate them. Knowing of the show's pragmatic stance on free software, it still came as quite a shock when co-host Bryan Lunduke recently announced that he is going to be producing commercial, closed-source applications exclusively for Linux.
Producing closed-source commercial programs on Linux to demonstrate that it can be done successfully in the hopes that others will follow suit is fatally flawed. As we just implied, Linux doesn't have the market share to support a large number of niche closed-source developers, so all that will be proven is that it can't be done. But just for fun, let's assume that Bryan does pull it off, and he sells enough copies of ones and zeros to pull in the necessary income to replace a regular W2 job in the industry (assuming this number is in the range of $60-100K for a man in his 30s, plus enough to cover the self-employment tax burden... that's a lot of shareware to sell). What will this do for Linux?
By no means do I subscribe to the FOSS evangelist "philosophy" perpetrated by Richard Stallman. In fact, I rather enjoy poking fun at it. But there is one area where I do believe that FOSS values are not only appropriate, but necessary: free operating systems. Linux, of course, being one of them.
The hosts of the Linux Action Show have long held the belief that in order for Linux to become "mainstream", it has to look right. I agree with this wholeheartedly. If a Linux distribution doesn't look as professional and aesthetically pleasing as its commercial competition, then the mainstream consumer is not going to choose it over a proprietary system. Another idea often emphasized by the Linux Action Show is that Linux is lacking high-quality consumer / end user applications. This I also tend to agree with. Aside from a few shining examples like OpenOffice.org, many Linux applications are sorely lacking in ease of use, documentation, and overall visual appeal. Great, so these guys are on the same page as me and they're all ramped up to actually do something about it!
The solution? Make more proprietary, non-free, closed-source applications for Linux that look really nice.
ZZZZzzzttt! Hold it right there. This makes no sense. I actually had to rewind the podcast a little bit to make sure I was hearing things right. I was. Unfortunately.
After hearing Bryan's entire spiel about the two applications he'll be launching (some kind of e-readers...I don't really do much e-reading) I quit listening and almost decided to go unsubscribe from the podcast. Then I realized that unsubscribing would involve work, and that unsubscribing from a podcast to convey my righteous dismay was about as lame as starting an internet petition to save the world from global warming, so I decided to remain a listener. (not to mention it's like the only quality Linux show left now that LUG Radio called it quits)
The problems with Bryan's assertion that Linux needs more commercial applications are many. Let's think about this.
It is true that the state of desktop computing today has a lot to do with commercial vendors licensing closed applications as a business model. A paid software engineer will produce more code than a non-paid one, and the money has to be made somehow, right?
Well, yeah, but that's beside the point. Windows did not become the market force it is today because a whole bunch of developers decided that they would make money by licensing software on Windows. This egg comes before the chicken. Developers chose Windows because Microsoft made it into the gorilla it has always been by sheer business force. Windows is where the customers are, and that's why it's where the developers are.
Producing closed-source commercial programs on Linux to demonstrate that it can be done successfully in the hopes that others will follow suit is fatally flawed. As we just implied, Linux doesn't have the market share to support a large number of niche closed-source developers, so all that will be proven is that it can't be done. But just for fun, let's assume that Bryan does pull it off, and he sells enough copies of ones and zeros to pull in the necessary income to replace a regular W2 job in the industry (assuming this number is in the range of $60-100K for a man in his 30s, plus enough to cover the self-employment tax burden... that's a lot of shareware to sell). What will this do for Linux?
It will do absolutely nothing for Linux except, perhaps, gain it a few proprietary niche applications. Will anyone switch to Linux because they can buy a comic book reader for it? I think not. Even if Adobe released its entire Creative Suite on Linux it still wouldn't make people switch.
OOOh, I can almost hear your grunts and groans! But I'm right. The person who would switch from Windows to Linux because Creative Suite ran on Linux is somebody who is ALREADY RUNNING LINUX. That person just also happens to be running Windows or OSX too.
To be frank, Linux is not as polished or easy to use as Windows or Mac OSX. The devil is in the details, and while some Linux distributions like Ubuntu work great overall, it's the little things that hurt. Why don't those extra buttons on my laptop do anything? Why isn't my wireless card working? Why is it so hard to use a projector? Why the hell are all the fonts so damn big? Which of my smart-ass younger relatives can I call when I suddenly can't change my desktop resolution?
No, Linux's value is not in its polish, that's Mac OSX. Linux's value is not in its giant install base, that's Windows. Linux's value is that it's free. Free as in beer and free as in libre. I can install Linux almost anywhere, for any reason, without having to worry about licenses or activation schemes. I can use my Linux applications freely knowing that in general they support open standards, have long lifespans, aren't tied to a specific machine, and don't force me to upgrade every six months. That is the value of Linux, which is the value of FOSS itself.
Nobody is going to make the switch to Linux because they can buy more commercial software for it. They're already getting their proprietary operating systems for "free" with the machines they buy anyhow, and they have no trouble buying commercial software for them. Adding commercial software to Linux won't fulfill any real need.
And what happens if Bryan's plan does work, and work better than he ever dreamed? What if we had scores of closed-source applications on Linux making Linux more comparable to Windows or OSX? What would we have gained? Better DRM support? Less control over our data? Applications that phone home to activate? Fun and exciting commercial EULAs? Wait, why were we using Linux in the first place?
Here's a suggestion, Bryan: create awesome looking, feature-rich applications for Linux and give them away, source and all. Pick a modern business model that doesn't involve artificial restrictions and show us how you can make money at it. Show us that open-source software doesn't have to suck because we aren't paying for a license. This would impress me. This would be good for Linux.
Until then, I'll be self-righteously thinking about unsubscribing from your show every time you mention your plan, but then not actually unsubscribing. (It's my protest and I can conduct it on any scale I choose, thank you very much.)
Wednesday, September 10, 2008
Damn you nano!
GNU nano is a pico-clone text editor for Unix-like operating systems, and I like it because it's quick and reminds me of using Pine for email in the good-ol' days when an inbox was full of messages from actual human beings. Anyhow, that's beside the point. Let me just say that I use nano frequently throughout the day.
The problem with nano is that the "search" or "find" feature is bound to ^W (as in "Where is", sheesh), which also happens to be the same key combination to close the active window in Microsoft Windoze (as in "close Window", double sheesh).
Every single day I'm in a web browser or some other application on Windows and I'll hit ^W on accident when trying to search for some text. *blip* there goes the window.
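If your nano build is recent enough (key rebinding arrived around nano 2.1, so check yours), you can put search on a less Windows-hostile key via ~/.nanorc. The function name here is from my reading of the nanorc docs, so verify it against `man nanorc` before trusting my muscle-memory advice:

```
# ~/.nanorc -- bind Ctrl+F to the search ("Where Is") function
# in the main editing menu, so your fingers can learn one habit
bind ^F whereis main
```

^W will still work in nano; the point is just to train your hands onto a combination that doesn't nuke browser tabs.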
I thought I'd blog about it because you look bored.
pfSense Captive Portal with Firewall Schedules
Today's sad story:
So I'm looking for a captive portal setup for my company's guest internet access, and I initially came across Monowall, which seemed to fit the bill. (I think you're supposed to spell it m0n0wall, but I'm not 16 years old.) While Monowall was a minor pain in the butt to install to the hard drive of the old machine that will be acting as my router for this project, it did work and worked well. However, I found it a little bit lacking in both the features department and the 'let's develop new features' department.
While browsing various forums for information about Monowall, I found a couple of nods to pfSense, which turns out to be a fork of Monowall with many more features and a snappy interface. Not only that, but pfSense is distributed as an ISO with a no-nonsense installer that can actually install the software to a hard drive without the extra steps Monowall requires in this regard. Yay.
So I was really only interested in two features. I'm not too particular. (in fact, it seems like most of my rants involve the need for two related features that no one piece of software can manage to provide) The features are:
1) A captive portal.
2) Scheduled firewall rules.
pfSense has both, but of course, as I'm sure you guessed, that's right, it wouldn't be my life if this wasn't the case, THEY DON'T WORK TOGETHER.
Enable a schedule on your LAN->any firewall rule and your captive portal will no longer function. In fact, I had to delete the schedule I created and completely restart pfSense before the captive portal came back online.
Why would these two features be mutually exclusive? What if you want to provide guest access via the captive portal during normal business hours, and no access during non-business hours? I don't think that this is a terribly odd request. If you're providing Wi-Fi access you certainly don't want to worry about some jackass out in the parking lot in the middle of the night trying to hack on your portal. Access to any portion of a network should be off when you know, for sure, that it does not need to be in service. Sure, I could enable encryption on the access point, but in this case I'm looking for ease of use over security -- though not completely. After all, I have some control over my parking lot during business hours.
What bothers me most is that a Captive Portal should sit conceptually on top of the firewall, and a firewall schedule should just fire rules at defined times. If you defined a rule that completely disabled access through the firewall at 5:00PM, yes, your Captive Portal would stop working -- but so what, that's sort of the point.
Friday, September 5, 2008
3Com Baseline 29XX Switch Password Lameness
Change the admin password on your 3Com Baseline Switch? Can't login anymore?
Here's what I figured out through a bit of guesswork: passwords must be 8 characters or less. The web-based control panel will allow you to enter passwords longer than 8 characters, and will simply truncate your password without warning.
So if you entered as a password: IAmADickensFan
You can login using: IAmADick
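You can reproduce the effect with plain shell string truncation -- this sketch just mimics what the switch appears to do (the 8-character limit is my own guesswork-derived observation, not documented behavior):

```shell
#!/bin/sh
# Mimic the 3Com control panel's silent truncation:
# keep only the first 8 characters of whatever you typed.
entered="IAmADickensFan"
stored=$(printf '%.8s' "$entered")
echo "$stored"    # IAmADick
```

Moral: just keep your 3Com passwords to 8 characters or fewer in the first place.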
Thursday, September 4, 2008
Ubuntu Hardy Screen Resolution Hell
Ubuntu Thinktank: "How can we make Ubuntu easier to use?"
Satan: "Well, you could completely automate screen detection so people don't have to manually configure xorg.conf. In fact, you should just stop using xorg.conf. You might also get rid of the Screens and Graphics tool and replace it with a single dialog box that allows the user to view his screen resolution and pretty much not do anything else. After all, an automated solution that works for all is the best solution."
Ubuntu Thinktank: "Interesting... but should we really believe that our one solution will work for every hardware scenario? I mean, easy is great but what about the people for whom it doesn't work right, won't they be..."
Satan: "SILENCE FOOLS! ONE WAY FOR ALL IS THE WAY AND IT IS MY WAY FOR I HAVE SPOKEN! (Just make sure it works on my Compaq laptop or there'll be HELL to pay. Hehe, I kill me.)"
-----
So this morning I boot my Ubuntu Hardy machine and behold: 640x480 awesomeness. No real reason for it that I can think of. It's kind of odd because I have a similar problem with my Vista machine "forgetting" its screen settings, but at least in that case I can CHANGE THEM BACK.
Seems like this is a common problem. Here are a couple reference threads: http://ubuntuforums.org/showthread.php?t=766879 , and http://ubuntuforums.org/showthread.php?t=771810 .
I have a Geforce 6200 video card in the machine and a pretty basic 1024x768 LCD. Not an abnormal configuration by any means. I'm using the proprietary nVidia driver.
I installed nvidia-settings and ran it, but guess what? It's not usable at 640x480. How about that! To all you designers at Gnome who thought: "hmm, is there any way that we can make buttons and stuff like that bigger?", go to hell.
I manually edited xorg.conf to add a line forcing the display to 1024x768, and the display came up at 800x600. That was pretty awesome.
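For the record, the kind of stanza I'm talking about -- a minimal, hypothetical Screen section with a Modes line forcing the resolution (the Identifier/Device/Monitor names here are illustrative; match them to whatever your xorg.conf actually calls those sections):

```
Section "Screen"
    Identifier   "Default Screen"
    Device       "nVidia GeForce 6200"
    Monitor      "Generic LCD"
    DefaultDepth 24
    SubSection "Display"
        Depth  24
        Modes  "1024x768" "800x600"
    EndSubSection
EndSection
```

In theory the first mode listed is the one X should come up in. In practice, as noted above, mine came up at 800x600 anyway.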
Finally I read a post about adding "Screens and Graphics" back to the main menu. Apparently they didn't get rid of it, they just hid it because the devil made them do it. Running Screens and Graphics I was able to change my monitor type and get back to 1024x768. (Note that if you add Screens and Graphics, it will show up under "Other", and if you run the menu editor at low resolution you'll have to use the tab and arrow keys to move around because the mouse will just scan around the virtual screen instead of clicking on what you want to be clicking on)
Oh, I should note that you can edit your menu by right clicking on it and choosing Edit Menus. I believe it should be the Applications menu that you right click on, but I can't remember. My menu is just a little Ubuntu icon because I couldn't stand all the space the default menu took on the panel.
As I wrap this up, I'd like to offer a bit of advice to the Ubuntu Thinktank:
There are two things that I need more than anything out of an easy-to-use graphical operating system:
1) The ability to enter stuff into the computer with the keyboard and mouse.
2) The ability to see the stuff I entered.
Any sort of screwups in those regards are showstoppers. Don't take away the user's ability to easily make changes to his configuration. You want to automate things? Cool. Do both.
Friday, August 29, 2008
Safari Books Online by O'Reilly
Thankfully, this post has nothing to do with me crying over your software.
I recently signed up for O'Reilly's Safari Books Online which is an online book service providing complete titles from O'Reilly and several other technology book publishers. O'Reilly, if you didn't already know, is one of the leading publishers of technology books. You've seen their books in the Computer section of your local mega-chain bookstore -- they're the ones with the pictures of ducks and chickens and badgers and snakes on them.
Safari Books Online contains, as far as I can tell, the complete lineup of O'Reilly books plus titles from other publishers such as Microsoft Press, Que, Manning (think pictures of swashbucklers and jesters), Prentice Hall, and Adobe Press, to name a few. As I write this, there are over 6,100 books available. Full books. Complete with indexes, tables of contents, and errata.
Having browsed a little bit through everything that's available, my current "Favorites" lineup includes Beautiful Code, MySQL - Fourth Edition, Building Web Services with Java, Programming Python - Third Edition, and A Designer's Guide to Adobe Indesign and XML.
In addition to books, there is other content available such as articles and videos. The videos were a big surprise and are now my favorite feature of Safari. I've been watching Java Fundamentals LiveLessons I and II (14 hours!) by Deitel & Assoc., Working with Color by Bruce Heavin, and Inspired CSS by Andy Clarke. Videos are available in both Flash and QuickTime formats, and stream reasonably well (though not as well as I would like at times). Video learning is the fastest way I've found to soak up information without falling asleep or becoming distracted.
The best part of Safari: it's only $42.99 per month for a wealth of information at your fingertips. There are cheaper versions of the service ($23 - 40) called "Safari Bookshelf" in which you have a fixed number of slots (10, 20, or 30) that you can fill with books. But the Bookshelf service only includes limited access to the service and requires that you think about which books you'd like to take a look at. With "Safari Library" being complete for $43, it's the no-brainer choice.
Just today I had to do some mucking with a BIND server, and instead of Googling my brains out or trying to read documentation written by programmers, I simply went to Safari and checked out DNS and BIND, 5th Edition and DNS and BIND Cookbook. These are books that I would love to have on my bookshelf for just this occasion, but I don't work with BIND enough to warrant purchasing them. (Did I mention that you also get up to 35% off when purchasing a print book through Safari?)
Enough raving, the service does have its flaws.
First, reading online is hard on the eyes and simply not as comfortable as kicking back with a book. While Safari does provide an index and table of contents for every book, it can't touch flipping through real pages for ease of use.
Second, the reading experience is somewhat hampered by the intentional limits imposed by the service. For instance, you can only read one section of a chapter at a time, and you have to click the "Next" button to proceed onward. This means that you're reading the books in what are essentially little snippets, which can really slow you down. Some books are actual scans, and those only let you see one full page at a time. Thus if there is a corresponding photograph on the next page, you have to click "Next" to get to it (there's a photography book I'm reading in which I find myself clicking the Next and Previous buttons constantly to switch between the text and images). All of this is made worse by the semi-ajaxy implementation -- there is nothing to indicate that the next page is loading after you click, and browsers like Firefox do not know how to scroll back to the top of the page when you flip to the next one.
Third, the service is kind of slow. Not unusably slow, but noticeable 1-2+ second page loads slow. When you're only given a snippet to read at a time, those seconds really start to add up.
All the negatives aside, Safari is great. Having access to this amount of great information is simply amazing.
Wednesday, August 27, 2008
Installing Dell OMSA and SNMP in Ubuntu 8.04 on PowerEdge R200
This is going to be a simple step by step as I go.
Go!
1) Become root:
sudo su
2) Edit /etc/apt/sources.list to include the following line:
deb ftp://ftp.sara.nl/pub/sara-omsa dell sara
This is an unsupported repository that has the dellomsa package.
3) apt-get update
4) Install snmp daemon and tools:
apt-get install snmp snmpd
5) Install openipmi service (I'm not sure if this is necessary, I had already installed it):
apt-get install openipmi
6) Install OMSA:
apt-get install dellomsa
7) Enable snmp in OMSA. Note: this will automagically make changes to your snmpd configuration.
/etc/init.d/dataeng enablesnmp
8) /etc/init.d/dataeng restart
9) Verify that you can connect to http://YOUR_IP:1311 . You should be able to login with a local system user.
10) IF YOU HAVE AN SAS6IR CONTROLLER DO THE FOLLOWING
10.1) Install mptctl driver:
modprobe mptctl
10.2) To load the driver at boot, edit /etc/modules and add the line:
mptctl
10.3) Initialize new driver with OMSA (or something like that):
/etc/init.d/instsvcdrv restart
10.4) Reconnect to https://YOUR_IP:1311 and verify that you can see your storage subsystem.
11) Ease up snmpd security so that you can actually use it.
Edit /etc/snmp/snmpd.conf and change:
com2sec paranoid default public
to:
com2sec readonly default public
12) Now we have to make some changes to the way Ubuntu starts snmpd. Edit /etc/default/snmpd and change:
SNMPDOPTS='-Lsd -Lf /dev/null -u snmp -I -smux -p /var/run/snmpd.pid 127.0.0.1'
to
SNMPDOPTS='-Lsd -Lf /dev/null -u snmp -p /var/run/snmpd.pid '
I don't know why removing the smux stuff works, but it's necessary. Removing the localhost IP address will allow snmpd to bind automatically to all interfaces. If you don't remove 127.0.0.1, you'll only be able to talk to snmpd from the local machine.
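A quick sanity check that the binding change took -- assuming the classic net-tools netstat is installed (`ss -lun` does the same job on boxes that have iproute):

```
netstat -lnu | grep :161
# before the edit, snmpd is loopback-only:  udp  0  0 127.0.0.1:161  0.0.0.0:*
# after the edit, it binds everywhere:      udp  0  0 0.0.0.0:161    0.0.0.0:*
```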
13) /etc/init.d/snmpd restart
14) /etc/init.d/dataeng restart
15) Verify that the following command spits out a whole bunch of stuff:
snmpwalk -OS -v 1 -c public localhost .1.3.6.1.4.1.674.10892.1
These should all be Dell-specific.
16) Verify that you can perform the same query from a remote machine (change localhost to the IP of the server we're setting up). Check firewall settings (UDP 161)!
17) Configure the system to start the OMSA web service when the system boots:
update-rc.d dsm_om_connsvc defaults
18) User management.
It appears as though OMSA wants you to log in to the GUI as root to become an OMSA Admin. To enable your root account in Ubuntu you have to do a sudo passwd root and give root a password. To disable the root account later, do a sudo passwd -l root .
To make a user a "Power User" in OMSA, you have to add them to the root group.
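So, for a hypothetical user "alice" (the group-based mapping is just my reading of OMSA's behavior -- verify against your own install):

```
# Append alice to the root group (supplementary membership;
# -a is important so you don't wipe out her other groups)
usermod -a -G root alice
# She'll need to log out and back in for the new membership to apply
```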
Tuesday, August 26, 2008
Some Thoughts on Design
I'm not a great software engineer. Quite the opposite. I don't get much time for development and many of my projects are quick and dirty hack jobs. I simply wear too many hats in my work to devote myself completely to any one aspect. On a daily basis I get to be the IT guy, software developer, photographer, designer, telephone repairman, technical writer, and guy who does whatever the hell else needs to be done.
Admitting that, I do have some advice for anyone developing software or designing just about anything:
1) Use your own shit.
2) Have somebody else use your shit, and actually listen to them when they tell you what's wrong with it. Chances are, they're right.
I remember reading a blog post a while back about the world's worst toaster, in which the author described a toaster that is overly complex -- a sure symptom of poor design. This post really spoke to me. Not because the message is terribly enlightening, and not because I disagree. It's stuck with me because I happen to have a toaster that is considerably worse than the one described in the blog post!
My toaster does not suffer from excessively complex design. It's your standard "pop-up" toaster with a knob that goes from "light" to "dark", and a single lever used to start the toasting process. An elegant, time-tested design that should be pretty hard to screw up. Well, "Chefmate" managed to screw it up pretty well.
Here's what I have to do to make toast:
1) Adjust the "light to dark" knob to somewhere between 50 and 75%. It doesn't really matter where, because anything under 50% is essentially untoasted (dried bread, really), and anything over 75% is burned-to-a-crisp.
2) Insert bread and push the toasting lever down.
3) Wait until the bread pops up.
4) Yay, the bread popped up! -- Not so fast, we're not done. Although the toasting process took about as long as one might expect, the bread at this point is somewhere between hot and barely-toasted. So what we need to do is push the lever down again to re-toast our bread.
5) Wait until the second toasting process "feels" like it's about half way done.
6) Pull the plug. Why pull the plug? Because an anti-safety feature built into the toaster will not allow the user to push the toasting lever up to manually pop the toast. The only way to get the toast to pop up before the toasting cycle is complete is to disrupt the power.
7) Inspect "toast" for done-ness. If it needs to toast a little longer, repeat steps 5 and 6 but vary the amount of time in step 5 as needed. Do not be tempted to adjust the light-dark knob to assist you in selecting the correct amount of time for subsequent toastings -- you will either burn it, or the bread will untoast itself and you'll have to start over (no, you simply won't be able to start toasting if you select "light").
Whew. It sucks to start the day pissed off at a toaster, but that's what I get for only spending $10. Or is it?
It amazes me that this toaster even exists in our world. It was clearly never *used* by anyone at the toaster factory to make toast, or the design would have been modified such that it could actually make toast. The anti-safety feature by which the user cannot manually pop the toast without unplugging the unit is another example of complete usability oversight. This has nothing to do with cheap components, necessarily, but shoddy design. The mechanics of a toaster are not complicated and thanks to plastic, a fairly featureless toaster is easy to mass-produce on the cheap. I have a hard time imagining the guy who was tasked with creating Chefmate's 2007 entry-level toaster -- how low can his self-esteem be?
Whenever I make toast, I think about MP3 players. The reason I do this is so when I write this blog post, I'll have a way to transition into my next example of crap design: cheap MP3 players.
My old 256MB Creative Muvo MP3 player finally died after years of faithful service. It didn't actually die, necessarily, but the "up volume" button broke off, so like a racehorse with a broken leg, I had to put it down. I decided that I wanted to get another cheap MP3 player that took a single AAA battery. I've got a lot of rechargeable AAA's, and I think that they're more convenient than built-in rechargeable batteries. If my battery dies while I'm mowing the lawn, I can simply swap in a fresh one.
Unfortunately, nobody wants to make an MP3 player that takes AAA batteries anymore. The reason for this, I believe, is to make players that are smaller than they need to be. This is an example of a design choice that is of little benefit to anyone, but helps with player sales because for some reason "thin" is "in." Once a device reaches a certain level of "smallness", it doesn't need to be any smaller -- but on the display shelf the smallest gizmo wins.
There are only a few MP3 players being produced that take AAA batteries, and they're all on the cheapo side. I tried out a couple recently but had to return them because they lacked certain software features that they simply should not have been missing. They weren't poor because they were made cheaply, they were poor because the designers didn't care to think about the devices actually being used by real people. Granted this is certainly a symptom of keeping down costs, but just a tiny bit of effort would have saved these devices:
- Nextar MA566 (or similar model).
This is a standard MP3 player with no frills. It plays MP3s. What more do you want? Well here's the problem: it plays the MP3s in order not by filename, but by file modification date, and THEN filename. Clearly this is a big oops, and whoever wrote the software didn't do enough testing.
I use MyPodder from Podcastready on my MP3 players to download podcasts and audiobooks. When MyPodder downloads new podcasts, it launches (as far as I can tell) about 4 download processes at once. This means that if I'm downloading 10 chapters from an audio book, they don't necessarily arrive in order. Chapter 3 might come down before Chapter 1, etc. This shouldn't be a problem because usually audio book chapters have names like 01_BookName, 02_BookName, such that any DECENT player can play the chapters back in the correct order.
Unfortunately the MA566 isn't smart enough to get this right, and if Chapter 3 is downloaded to the device before Chapter 1, it plays Chapter 3 first. This means that at the end of any given chapter, I have to stop mowing the lawn and use the player's crappy controls to look for the next chapter (hopefully I haven't forgotten which chapter I was just listening to, or I have to listen all the way through the chapter intro to decide if I'm at the right place).
One workaround was to use various filesystem utilities to reset the file modification dates. Eventually I tired of this and literally bit the player nearly in half in a fit of disgust.
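For the curious, that workaround boils down to a few lines of shell (a sketch assuming GNU touch; the "podcasts" folder name and the chapter filenames here are hypothetical):

```shell
dir="podcasts"                    # hypothetical folder on the player

# Demo setup: two chapters whose modification times arrived out of order.
mkdir -p "$dir"
touch -d '@2000000000' "$dir/02_Book.mp3"
touch -d '@2000000001' "$dir/01_Book.mp3"   # newer mtime, but should play first

# Re-stamp every file in filename order, one second apart, so a player
# that sorts by modification date ends up matching filename order.
t=1000000000
for f in $(ls "$dir" | sort); do
    touch -d "@$t" "$dir/$f"
    t=$((t + 1))
done

ls -tr "$dir"    # oldest-first listing now matches filename order
```

Running this against the player's music folder after every sync kept the MA566 happy, for whatever that's worth.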
- Philips GoGear
The next cheapo MP3 player I picked up was a Philips GoGear. Ok, so Philips is a slightly better brand, and the unit takes a single AAA battery. Shouldn't have any problems then, right?
Much to my complete shock, this player was missing a feature that is so common that it's more of a given than a special feature: the ability to resume playback. How in the hell does this happen? When you turn on the GoGear MP3 player it doesn't start playing from where you left off, it starts you over from scratch. I went through all the options and could find no way to modify this behavior. This little oversight makes the player completely useless for listening to podcasts and audio books!
Recalling the bad taste of the Nextar player as my teeth crunched through it, I decided to return this piece of crap pronto.
- Sansa Clip
Feeling quite discouraged, I gave up my quest for an MP3 player that uses AAA batteries and ended up with a 4GB Sansa "Clip". I'm mostly satisfied with this player and have promised myself to keep it plugged in to my USB charger when I'm not using it. There's nothing worse than mowing the lawn without headphones because of a dead battery. (You may have noticed several references to lawn mowing. Yes, my lawn is very large. Yes, I am clearly so cheap that I use a pushmower. It takes hours and hours. Someday I'll tell you about trying to buy a cheap lawnmower that doesn't suck!)
The only problem I've had thus far with the Sansa Clip is that the clip comes off. That's right. The "clip" is such an important feature of this unit that the model number is literally CLIP, yet the clip falls right off if you twist it just so. Way to think it through, fellas.
I've gone on long enough. To recap: it doesn't matter how cheap or expensive the software or gizmo you're creating is, remember that human beings are going to spend their money on it and actually use it. 1) Try using it yourself. Does it work? 2) Consider more than your own personal use case. Does it work for others? Sometimes it's just a minor change that stands between usable and useless.
Friday, August 22, 2008
A Note About Migrating OpenVZ VPSs
I had two primary reasons for adopting OpenVZ to provide server virtualization:
1) I like the idea of being able to "freeze" a complete server and resume it later, especially on different hardware. This leads to some very interesting backup / restore possibilities, in that an entire "server" and the application it is responsible for are all packaged up into one independent unit. I don't actually need virtualization for consolidation reasons.
2) Full blown virtualization solutions are extremely resource intensive, and the added overhead is just not worth the benefit in my case. I would have to buy new servers! OpenVZ solves this problem nicely.
So of course I'm having a good time suspending and resuming servers (checkpointing), and have just recently been playing with migration of servers between hardware nodes. The way that OpenVZ makes this happen is actually not terribly elegant. Nothing about OpenVZ is elegant, and I find there's always a little bit of unrest in the back of my mind about whether OpenVZ is going to bite me in the ass sometime down the road.
On to the notes:
When performing a migration between hardware nodes, it's extremely important that the hardware node be configured in the same or similar manner as the node you're migrating from. OpenVZ is not smart enough to configure itself during a migration, so in addition to configuring the server OS itself, you must also configure OpenVZ correctly or your migrations will not happen.
Here's what got me:
1) Disk quotas. By default an OpenVZ install will be configured with disk quotas turned on, and the default quota is not large (it seems that there is a global quota (the one I'm talking about), and individual per-container quotas). You have to either disable quotas on the new node, or crank them up high enough to accept the VPS you're bringing in. These settings for the global quotas are in vz.conf which is probably somewhere like /etc/vz/vz.conf.
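As a sketch, the settings in question look something like this -- the values are illustrative, and the container ID (101) is hypothetical:

```shell
# /etc/vz/vz.conf -- global switch
DISK_QUOTA=no                     # disable quotas entirely, or leave "yes"
                                  # and raise the per-container limits

# /etc/vz/conf/101.conf -- per-container limits (soft:hard)
DISKSPACE="10485760:11534336"     # in 1K blocks
DISKINODES="400000:440000"
```

Either way, make sure the target node can actually hold the incoming VPS before you start.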
2) For the life of me I could not get a migration to work via a dump file. This concerns me greatly. I'm not sure if you're supposed to create a dummy container on the new node with the same ID as the container you're migrating, but the restore wouldn't work without the proper private directory existing. You also need to bring over the VPS's conf file and put it in the right spot (maybe /etc/vz/conf) before trying the restore.
The instructions here: http://wiki.openvz.org/Checkpointing_and_live_migration do not work. There are things that need to happen between the dump and the restore that are not addressed in the wiki article.
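For the record, here is the shape of the offline dump/restore procedure I was attempting. Treat this as a hedged sketch rather than a working recipe -- as I said, it never fully worked for me. The container ID (101), hostname, and paths are all hypothetical:

```shell
# On the old hardware node:
vzctl chkpnt 101 --dumpfile /tmp/Dump.101       # suspend container 101 and dump its state
scp /tmp/Dump.101 newnode:/tmp/
scp /etc/vz/conf/101.conf newnode:/etc/vz/conf/ # the conf file must come along
rsync -a /vz/private/101/ newnode:/vz/private/101/  # private area must exist on the target

# On the new hardware node:
vzctl restore 101 --dumpfile /tmp/Dump.101      # resume from the dump
```

Whatever the missing step between the dump and the restore is, the wiki doesn't mention it.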
3) You need to configure networking properly on the new hardware node before migrating. In my case, I had to make sure that iptables was set up the same way, and also mirror settings from the old /etc/sysctl.conf file. At a minimum, make sure you set net.ipv4.ip_forward = 1.
There are more sysctl parameter suggestions in the OpenVZ Users Guide also. (Note: this guide is extremely light on the issue of migration)
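For reference, the sysctl settings suggested for a hardware node look roughly like this (mirror whatever your old node had; only ip_forward is strictly required for venet networking, the rest are the guide's suggestions):

```shell
# /etc/sysctl.conf on the hardware node (apply with: sysctl -p)
net.ipv4.ip_forward = 1                   # required for container networking
net.ipv4.conf.default.proxy_arp = 0
net.ipv4.conf.all.rp_filter = 1           # enable source route verification
net.ipv4.icmp_echo_ignore_broadcasts = 1
kernel.sysrq = 1                          # suggested by the OpenVZ guide
```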
4) The online migration option seems to work if you follow the instructions in this wiki article exactly.
Note to Ubuntu Server users: you have to enable the root user on the new node in order to do an online migration. sudo passwd root
Don't be too hasty to assume that online migration is the only tool you need to migrate from one hardware node to another. Consider the case where the old hardware node has caught fire and burns down your entire compound. After you've rebuilt, you'd better make sure that you can restore your backed up VPSs from dump files to your new servers. As I mentioned earlier, I can't seem to make it work.
5) When performing an online migration with default settings, your "old" VPS will be removed!! Be aware of this if you're just playing around! See the options for vzmigrate.
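Put together, the online migration is a one-liner, and the flag worth knowing about is -r, which controls whether the source container gets deleted after a successful migration. The container ID and hostname here are hypothetical:

```shell
# Live-migrate container 101 to newnode; "-r no" keeps the source container
# around instead of removing it (the default removes it!).
vzmigrate --online -r no -v newnode 101
```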
6) OpenVZ error messages are about as useful as a milk bucket under a bull. Turning on verbose doesn't do anything with some commands. I got error messages that do not even exist in the Google index! I got error messages that just said "Error:"!
This software works pretty well, but I really don't trust it yet.
Thursday, August 14, 2008
DanielStallmanAliceLippmann/Richard Stallman
Would everyone please stop calling DanielStallmanAliceLippmann/Richard Stallman "Richard Stallman?" After all, Daniel Stallman and Alice Lippmann were the first to start development on DanielStallmanAliceLippmann/Richard Stallman. It's only fair to give credit where credit is due.
FACTOID: Did you know that 50% of DSAL/Richard Stallman was created by Daniel Stallman, and the other 50% by Alice Lippmann? Richard Stallman is responsible for 0% of the creation of Richard Stallman, yet he still runs around calling himself Richard Stallman.
DSAL/Richard Stallman, for those not in the know, is responsible for the GNU/Linux operating system. The GNU/Linux operating system is Linux with a GNU and a forward-slash in front of it.
Tuesday, August 12, 2008
rm -rf /opt/liferay
Sheesh, another Liferay install down the tubes. This is getting ridiculous.
Unfortunately I can't find another portal application with a halfway decent document management system included. If anyone knows of anything, let me know. I'm currently downloading the just-released Liferay 5.1.1, which I'll be hating in approximately ten minutes.
Here's some advice for anyone who is thinking about using Liferay:
1) Liferay is extremely convoluted. The end-user system is organized the way a Java programmer might organize his source files. This does not lend itself to usability. Your users, if not extremely competent, won't know what the hell is going on or where they are in the system half the time.
2) Liferay is extremely customizable, but not in a meaningful way. The concept of a "personal space" and a "public" space within the portal all exist, as you might expect, but merging the two concepts into one concise "website" is very difficult.
3) The permissions system has fine-grained itself into sand. You have users, roles, groups, communities, and organizations all duking it out at once over private and public pages organized by user, community, and organization.
A prime example of how this fails is this: you can add users to an organization, and then add organizations to a community. This sounds good, especially for an extranet application. However, just because a user belongs to an organization that belongs to a community does not mean that the various portlets within the community will recognize that the user is part of the organization that belongs to the community. Whew, that's a mouthful. Here's what I mean: if you create a community-wide announcement, users who only belong to organizations that belong to the community will not see the announcement, because the system doesn't realize that they are implicitly part of the community. One way around this is to create a catch-all user group where you dump all of the users who belong to the various organizations that are part of your community. You should also then create a group for each organization in case you want to send announcements to individual organizations through the same announcement instance. Ok, so you're now managing three groups in addition to your community and organizations. Why? Who the hell knows. It's a nightmare.
4) Extending the system is a horrible thing. Just don't, unless it's going to be your full time job. Every single Java technology that has ever escaped from Java Hell is used behind the scenes. You're going to have to know them all. In depth. There are so many references to references to references that you're going to have half a dozen files open just trying to figure out how to make one silly change. Be careful, do something wrong, even in an external portlet, and you could break your install. (why do you think I'm here bitching while waiting for 5.1.1 to download!?) And of course in Java web development style, every little change you try out means restarting the application server, regardless of whether it's in an actual java class or just raw HTML in some template. Wearing a watch? Don't.
5) Documentation was good at around 4.4. Now it sucks. And given that liferay.ca expired and the last Wiki overhaul broke every link in the universe, don't even bother using Google. The Wiki isn't getting updated very well. Development documentation is awful. Just awful. The days of the Lifecast seem to be over.
6) The community is virtually non-existent. I would speculate that 70-90% of all questions asked in the forums go unanswered or unresolved. People posting code examples don't often check their work (because it would take about an hour to do so!). The forum software itself doesn't handle code examples at all because they're using a stupid fixed width template.
7) It's a resource hog. Most Java portals are. It's amazing at all the code that runs just to render a typical payload. They've got all their bases covered, for sure, having used every existing technology all at once, but unfortunately there's only one guy playing, and he's standing in right field with his finger in his nose looking up at some birds.
8) What's with the Christian portlets? I mean, Jesus H Christ, nobody ever mentions this. It's a bit odd. Is it a statement? "Look, we know this shit is hard... so here's a Prayer Portlet."
The sad thing is that Liferay seems to be miles ahead of other opensource portal solutions. Nothing would make me happier than a complete feature freeze and a complete examination of how the system is actually being used by, like, people 'n stuff. I really think that all of the features are there, they just need to work properly and take into consideration more than one use case.
As it stands I am very close to keeping Sharepoint as my internal portal even though I really don't like being locked into Microsoft products. The project I'm currently using Liferay for is an extranet and not entirely critical. Based on what I've learned from the experience there's almost no way that I can deploy this software internally and expect it to function well in the short or long term. (BTW, this little project of mine is supposed to be demoed tomorrow. Haha. haha ahdhhahaha haha ah ah ahha aha ah aha aaaaaahhhhhhhhhhhhhhh ha.)
Oop, my download is done. Ahh, nothing like a fresh install to reinstate my sense of hope.
Back in ten minutes...
Unfortunately I can't find another portal application with a half way decent document management system included. If anyone knows of anything, let me know. I'm currently downloading the just-released Liferay 5.1.1 which I'll be hating in approximately ten minutes.
Here's some advise for anyone who is thinking about using Liferay:
1) Liferay is extremely convoluted. The end-user system is organized the way a Java programmer might organize his source files. This does not lend itself to useability. Your users, if not extremely competent, won't know what the hell is going on or where they are in the system half the time.
2) Liferay is extremely customizable, but not in a meaningful way. The concept of a "personal space" and a "public" space within the portal all exist, as you might expect, but merging the two concepts into one concise "website" is very difficult.
3) The permissions system has fine-grained itself into sand. You have users, roles, groups, communities, and organizations all duking it out at once over private and public pages organized by user, community, and organization.
A prime example of how this fails is this: you can add users to an organization, and then add organizations to a community. This sounds good, especially for an extranet application. However, just because a user belongs to an organization that belongs to a community does not mean that the various portlets within the community will recognize that the user is part of the organization that belongs to the community. Whew, that's a mouthful. Here's what I mean: if you create a community-wide announcement, users who only belong to organizations that belong to the community will not see the announcement, because the system doesn't realize that they are implicitly part of the community. One way around this is to create a catch-all user group where you dump all of the users who belong to the various organizations that are part of your community. You should also then create a group for each organization in case you want to send announcements to individual organizations through the same announcement instance. Ok, so you're now managing three groups in addition to your community and organizations. Why? Who the hell knows. It's a nightmare.
4) Extending the system is a horrible thing. Just don't, unless it's going to be your full time job. Every single Java technology that has ever escaped from Java Hell is used behind the scenes. You're going to have to know them all. In depth. There are so many references to references to references that you're going to have half a dozen files open just trying to figure out how to make one silly change. Be careful, do something wrong, even in an external portlet, and you could break your install. (why do you think I'm here bitching while waiting for 5.1.1 to download!?) And of course in Java web development style, every little change you try out means restarting the application server, regardless of whether it's in an actual java class or just raw HTML in some template. Wearing a watch? Don't.
5) Documentation was good at around 4.4. Now it sucks. And with liferay.ca expired and the last Wiki overhaul breaking every link in the universe, don't even bother using Google. The Wiki isn't getting updated very well. Development documentation is awful. Just awful. The days of the Lifecast seem to be over.
6) The community is virtually non-existent. I would speculate that 70-90% of all questions asked in the forums go unanswered or unresolved. People posting code examples don't often check their work (because it would take about an hour to do so!). The forum software itself doesn't handle code examples at all because they're using a stupid fixed-width template.
7) It's a resource hog. Most Java portals are. It's amazing at all the code that runs just to render a typical payload. They've got all their bases covered, for sure, having used every existing technology all at once, but unfortunately there's only one guy playing, and he's standing in right field with his finger in his nose looking up at some birds.
8) What's with the Christian portlets? I mean, Jesus H Christ, nobody ever mentions this. It's a bit odd. Is it a statement? "Look, we know this shit is hard... so here's a Prayer Portlet."
The sad thing is that Liferay seems to be miles ahead of other open source portal solutions. Nothing would make me happier than a complete feature freeze and a complete examination of how the system is actually being used by, like, people 'n stuff. I really think that all of the features are there, they just need to work properly and take into consideration more than one use case.
As it stands I am very close to keeping Sharepoint as my internal portal even though I really don't like being locked into Microsoft products. The project I'm currently using Liferay for is an extranet and not entirely critical. Based on what I've learned from the experience there's almost no way that I can deploy this software internally and expect it to function well in the short or long term. (BTW, this little project of mine is supposed to be demoed tomorrow. Haha. haha ahdhhahaha haha ah ah ahha aha ah aha aaaaaahhhhhhhhhhhhhhh ha.)
Oop, my download is done. Ahh, nothing like a fresh install to reinstate my sense of hope.
Back in ten minutes...
Tuesday, August 5, 2008
Oh GNU You're So Awesome!
It's called "Linux", get over it.
We're all comfortable with it. We all know what it means. Tooting your own horn just makes you sound like, well, someone tooting his own horn.
Look, we're all impressed that GNU is responsible for about 30% of any "Linux" operating system. And thankfully, the remaining majority of contributors don't seem to be quite so self-obsessed.
Here's a tip: you should have picked a better name.
Really, Mr. Stallman, do you lie awake at night thinking of such petty things?
Monday, August 4, 2008
Alfresco - "Commercial Open Source"
So I don't want to sound like too much of a hater (although the purpose of this blog is to give me a place to vent), but I'm very unimpressed with Alfresco's "open source" business model.
While the Church of the Free Software Foundation would have us believe that source code is a basic human right, other companies like Alfresco have a more restrictive view when it comes to the licensing of their code. And really I'm ok with that. Just because something is trivial to duplicate and distribute does not mean that it has no monetary value. Work is work.
What I do have a problem with are various voices from the Alfresco blogs ripping on closed source vendors and patting themselves on the back as if they were on the same level as "real" open source companies like, for instance, the Apache Software Foundation. There's a big difference between commercial open source companies like Alfresco and less restrictive entities like Apache.
Alfresco maintains two open source branches of their software. One for the so-called community, and one for the Starship Enterprise (i.e. paying customers). What's wrong with that?
Well, for starters I've come across bugs in the Community version of Alfresco that I know are fixed in Enterprise. Even though the Community or "Labs" version of Alfresco continues to plow along and is nearly at version 3.0, show-stopper bug fixes from Enterprise 2.1+ still haven't been merged into Community. Ok, it's open source, so I can fix them, right?
Let me ask you this: why should I? If I were to fix the JSR-168 authentication flaw that's holding me back from actually using Alfresco, then I would be duplicating work that has already been done. Not only that, but the severity of the flaw would almost certainly mean that my fix would be of lesser quality than the fix that already exists because I am not an Alfresco developer. Even if I did fix it, and fix it well, there's nobody to give my fix back to because Alfresco doesn't take a lot of community contributions, and (have I mentioned this?), it's already fixed.
So let's talk about the Alfresco Community for a moment. Where is it? I'm seeing an outdated Wiki and some very inactive forums where moderators spend more time moving threads around to "appropriate" forums than answering questions. The use of the word Community here is a bit of a stretch, and has no greater value than I'd expect to get from a closed-source product. The Alfresco Community is less of an "open source community" and more like a gathering of frustrated but oddly grateful beta testers.
In addition to these not-so-opensourcey qualities of Alfresco is the biggest slap in the face of all: the cute little "nag screen" at the bottom of every page rendered by the Community version. You know, the one that reads: "This software might not be any good because the company running it hasn't paid any money." While much of the look and feel of Alfresco can be easily changed, this little gem is actually buried in the Java source. On purpose. To irritate you into paying. Cute.
"Dude, why don't you just shut up and pay for the software?"
I'm not convinced that it's worth paying for yet. If Alfresco has a similar pricing structure to other open source commercial/support companies, then it's probably more expensive than its closed source competitors. Whoa, did I say that? Yeah. Despite what you might have been led to believe, products like Sharepoint from terrible companies like Microsoft are actually dirt cheap for small to medium sized companies. Sure, you don't get free support with the license fees, but at least I'm not forced to pay extra for something I haven't needed to this day.
So why should I fork over the cash if the "demo" version I'm using doesn't even correctly implement a fairly straightforward feature? A feature that is actually specified in a formal standard no less!
"Yeah, but it's worth the price because it's open source, which is all open and not all closed 'n' stuff."
I love open source, but in this case I'm not really sure what the benefit is. I don't use open source software out of fear that some day the vendor will go out of business and I'll need to carry on modifying the software myself. I'm certainly not going to just assume that the "community" is going to pick it up and save the day. I do like open source because I can modify the behavior of the software and fix bugs if need be, but this is really something that I rarely want to do. I'm interested in the software because I want to use the software. And I certainly don't want to be fixing bugs in software that I'm paying support fees for.
If an open source application isn't fully open source then its appeal is greatly diminished. The community around it is diminished. It's really only slightly more interesting than closed source alternatives.
"Ok, so you're still saying that Alfresco is more interesting than its closed source competitors. What the hell is your problem?"
Well, frankly I'm just sick and tired of the Alfresco bloggers ripping on closed source companies. Alfresco is not a community driven open source project, and I certainly don't feel that they have any business taking the moral high ground and ripping on their closed source competition. Suck it up, Alfresco, and open the real source. Microsoft might have its head up its ass when it comes to understanding the benefits of open source, but you've only got one eye peeking out.
"Does it make you feel better to make up these insulting questions that I, the reader, am supposedly dying to ask you?"
Why, yes. Yes it does. Thanks.
Wednesday, July 16, 2008
DIE WIKI DIE
A Wiki is good for recording and storing all of the shit that comes out of your brain.
A Wiki is a terrible place for managing and showcasing all of the shit that comes out of your brain.
The Wiki love affair has got to stop. Please, make it stop. At least stop using Wikis as your primary technical support system.
1) Every wiki is out of date. It doesn't matter how much information you've got in your Wiki if much of it no longer applies.
2) Wikis have no good mechanism to determine what content is out of date. They also have no mechanism to convey to the reader that the information may be out of date. A Wiki relies solely on the writer to maintain the validity of the information, and the writer can't do that, as evidenced by his use of a Wiki to manage information. What's so bad about out-of-date information? It costs your consumers time. Lots and lots of time.
3) Wikis encourage "yada yada" explanation.
4) Wikis disrupt discussion forums, which are vastly more immediately useful. (Reply: it's in the wiki).
5) Wikis encourage bad information management through the very feature that makes a Wiki unique and useful.
6) Wikis are often difficult to navigate and search, as Wikis are often designed with the Wiki writer in mind.
In conclusion, I hate Wikis. If you need a helpdesk application, then use one.
Liferay + Alfresco = Lalfrescoray
Hey what's that, some egg on my face?
My comments about the Liferay WebDAV support in this whiny article are incorrect. As of Liferay 5.1 WebDAV does seem to work since I replaced a goofy NIC in my server. Please see the article comments for more information.
*************************************
Once again I find myself stuck between two partial solutions that I can't combine to solve what seems to me to be a rather simple problem. Here it is:
1) I need a "portal".
2) I need a document management "portlet".
3) I must access the document management system via CIFS or WebDAV.
4) The system must be somewhat seamless; single sign on at least.
After reviewing some options, I decided that Liferay should be able to meet my requirements. It's a "portal" with lots of "portlets" included, including document management. (notice my sarcastic quotation marks)
DENIED: Liferay's WebDAV doesn't work well enough to be considered working.
Ok ok, I've heard a lot of talk on the ol' blogosphere about integrating Alfresco and Liferay. From all the buzz it must be a pretty good solution, especially with all this "web script" stuff on the Alfresco side. Plus Alfresco has some great features that I can take advantage of, and I know that its CIFS support works because I've used it in the past.
FAIL. Liferay and Alfresco do not mix. Like, not at all. Well, not if you want to use Alfresco 2.1 Community or better. (BTW, the latest Liferay Alfresco Content Portlet 5.0.0.1 uses Alfresco 2.0) Ok ok, I'm being a little too negative. They do mix I'm sure if you write your own authentication filter to make SSO work. No problem!
You see, both Liferay and Alfresco love to talk up their standards support, like JSR-168, but it seems that neither of them pulls it off well enough to, you know, INTEROPERATE.
The guy (guys?) at Cignex have a supposed solution using CAS between the two, but Alfresco doesn't jibe with CAS as easily as Liferay, and I couldn't get this working reliably for SSO. I even bought this book called "Liferay Portal Enterprise Intranets" by Jonas Yuan in anticipation of the Alfresco integration section. What a complete bust and $60 down the drain. The book isn't necessarily bad, but the devil is in the details and the book is unfortunately devil-free.
Not surprisingly, another solution is to simply purchase Alfresco Enterprise, because it uses a different code base than Community. In fact, Alfresco Enterprise 2.2+ will work with Liferay while Alfresco Community 2.9B will not. Open source? Sorta.
Frankly, I'm a little bit surprised to see the world of open source portals and document management systems still in such a ridiculous state. Liferay has more bugs than a Volkswagen dealer and both Liferay and Alfresco are stupidly difficult to configure in any interesting way, as all Java-based web applications seem to be. XML NIGHTMARE! Here's a tip: if it takes weeks just to understand how to configure your software, then your software isn't finished. Excessive meta configuration files are not a feature. Most of your users don't give a shit about Java Beans or any other beans for that matter.
Community? Not very good in either camp. Outdated documents, wikis (DIE WIKI DIE) and very poor forum support.
Not to just pick on Liferay and Alfresco, I've also tried out Exo and Icecore (built on Liferay) with limited success. I've looked at the JBoss portal a few times also but I just can't get over them basing their forum portlet on phpBB. I know, sounds petty, but phpBB....jesus...I don't want to think about this anymore. Bye.
Oh, and one more thing (damn I'm frustrated). HTTP Basic Auth is NOT a good solution for any feature of any software intended to be used in the Enterprise. (yes, including the Starship Enterprise) Pushing it over SSL only makes transport safer, and doesn't solve the root problem. (and we're just self-signing anyhow, even if we don't like to admit to it) And entering a password into a browser is NOT a good solution, ever. Receiving a password from a web browser is NOT a good solution. This is all the more true on the Starship Enterprise where people are either using potentially weak SSO or making all of their passwords match. The last thing you want is domain passwords in your damn browser cookies. I'm looking at you, REST authentication apologist!
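To see why SSL doesn't fix the root problem: the Authorization header in Basic Auth isn't hashed or encrypted at all, it's just "user:password" run through base64. Anything that can read the header (a proxy log, a pasted curl command, a browser plugin) has the password. A quick sketch, using made-up credentials:

```shell
# Basic Auth's Authorization header is just base64("user:password").
# The credentials here are invented for illustration.
printf 'admin:s3cret' | base64
# prints: YWRtaW46czNjcmV0

# Anyone holding that header can reverse it instantly:
printf 'YWRtaW46czNjcmV0' | base64 -d
# prints: admin:s3cret
```

Which is why "but it's over SSL" only protects the wire, not the endpoints where the header gets logged and cached.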
Thursday, July 10, 2008
Liferay + Alfresco on OpenVZ
This post is similar to my last entry, as it deals with Java apps and resource settings in OpenVZ. Today I tried to fire up Liferay 5 with the Alfresco 2.1 WAR on Tomcat 5.5, and kept getting the following OutOfMemoryError:
INFO: Starting Coyote HTTP/1.1 on http-8080
Jul 10, 2008 7:50:07 PM org.apache.jk.common.ChannelSocket init
INFO: JK: ajp13 listening on /0.0.0.0:8009
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:295)
at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:433)
Caused by: java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:597)
at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.<init>(ThreadPool.java:648)
at org.apache.tomcat.util.threads.ThreadPool.openThreads(ThreadPool.java:520)
at org.apache.tomcat.util.threads.ThreadPool.start(ThreadPool.java:149)
at org.apache.jk.common.ChannelSocket.init(ChannelSocket.java:436)
at org.apache.jk.server.JkMain.start(JkMain.java:328)
at org.apache.jk.server.JkCoyoteHandler.start(JkCoyoteHandler.java:154)
at org.apache.catalina.connector.Connector.start(Connector.java:1090)
at org.apache.catalina.core.StandardService.start(StandardService.java:457)
at org.apache.catalina.core.StandardServer.start(StandardServer.java:700)
at org.apache.catalina.startup.Catalina.start(Catalina.java:552)
... 6 more
This error turned out to be a bit deceptive and had me needlessly fiddling with Java and OpenVZ memory settings for a time. I should have known to check /proc/user_beancounters right away. If I had, I would have noticed that I was hitting the limit on the numproc parameter, which is set to a very low 240. I increased the limit to 1000 (just to get an idea of how many processes I would need) using the following command:
vzctl set 101 --save --numproc 1000:1000
Turns out that Liferay with the Alfresco plugin on Tomcat 5.5 using MySQL for everything uses up 248 processes on my machine just to start.
Anyhow, the OutOfMemoryError was due to the process limit and had nothing to do with memory. Just thought I'd share in case anyone else runs into this and Googles before troubleshooting properly.
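If you want to catch this faster than I did, the failcnt column (the last one) in /proc/user_beancounters tells you exactly which limit is being hit. A rough sketch of the check I should have done first, run here against a trimmed sample file with hypothetical numbers rather than the real /proc file:

```shell
# Trimmed, hypothetical sample of /proc/user_beancounters for CT 101;
# the real file has the same shape, and failcnt is always the last column.
cat > /tmp/beancounters.sample <<'EOF'
uid  resource     held    maxheld  barrier   limit     failcnt
101: kmemsize     625164  1720287  11055923  11377049  0
     numproc      240     240      240       240       57
     privvmpages  1173    129807   131072    139264    2
EOF

# Print every resource whose failcnt is non-zero (skip the header row,
# whose last field isn't numeric).
awk '$NF ~ /^[0-9]+$/ && $NF > 0 { print $(NF-5), "failcnt=" $NF }' /tmp/beancounters.sample
# prints:
# numproc failcnt=57
# privvmpages failcnt=2
```

Once you know which parameter is failing, the matching vzctl set ... --save call is the actual fix.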
Tuesday, July 1, 2008
Installing JDK in OpenVZ VPS
I started playing with OpenVZ today because I'm getting terrible performance from QEMU+KQEMU and various Java applications.
I had some trouble installing the JDK inside my VPS, and the problem cost me about an hour so I figured I'd share. I'm running OpenVZ from a pretty bare Ubuntu Hardy server, and the VPS was created from ubuntu-8.04-i386-minimal.tar.gz which I downloaded from the OpenVZ site.
When I'd try to install the Sun JDK or OpenJDK via apt-get, I'd get the following error:
Setting up sun-java6-bin (6-06-0ubuntu1) ...
Aborted
dpkg: error processing sun-java6-bin (--configure):
subprocess post-installation script returned error exit status 134
dpkg: dependency problems prevent configuration of sun-java6-jre:
sun-java6-jre depends on sun-java6-bin (= 6-06-0ubuntu1) | ia32-sun-java6-bin (= 6-06-0ubuntu1); however:
Package sun-java6-bin is not configured yet.
Package ia32-sun-java6-bin is not installed.
dpkg: error processing sun-java6-jre (--configure):
dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of sun-java6-jdk:
sun-java6-jdk depends on sun-java6-bin (= 6-06-0ubuntu1); however:
Package sun-java6-bin is not configured yet.
dpkg: error processing sun-java6-jdk (--configure):
dependency problems - leaving unconfigured
Errors were encountered while processing:
sun-java6-bin
sun-java6-jre
sun-java6-jdk
E: Sub-process /usr/bin/dpkg returned an error code (1)
Turns out it's just a resource problem. If you're getting this error, check /proc/user_beancounters and see if you're getting any failures:
cat /proc/user_beancounters
101: kmemsize 625164 1720287 11055923 11377049 0
lockedpages 0 0 256 256 0
privvmpages 1173 129807 131072 139264 2
As you can see I was having problems with the limit on privvmpages. I doubled the default twice before it worked, resulting in a barrier of 262144 and a limit of 278528. You can change these values like so:
vzctl set 101 --privvmpages 262144:278528 --save
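For the record, the arithmetic is nothing fancier than "double the barrier and limit until the install stops aborting." A small sketch starting from the values shown in my beancounters output (131072:139264; yours may differ), which only prints the vzctl command rather than running it:

```shell
# privvmpages barrier:limit as shown in /proc/user_beancounters above
barrier=131072
limit=139264

# Double both until the JDK postinstall stops hitting the ceiling
new_barrier=$((barrier * 2))   # 262144
new_limit=$((limit * 2))       # 278528

# Print the command to apply and persist the new values for CT 101
echo "vzctl set 101 --privvmpages ${new_barrier}:${new_limit} --save"
```

Then re-run apt-get and watch the failcnt column to confirm the counter stops climbing.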
Tuesday, June 24, 2008
Sigh, Goodbye OpenSUSE
So I had to ditch OpenSUSE 10.3. It doesn't come as any surprise, really. I've never been able to break out of the Linux installation loop:
1) Install new distro
2) Work out various hardware kinks
3) Try to configure new distro to my liking
4) Try to use new distro for a few weeks
5) Hit a brick wall on some necessary feature; or distro simply breaks
6) Goto step 1
I had OpenSUSE 10.3 installed on both my Thinkpad R32 laptop and on a standard desktop machine (Asus Mobo, Athlon XP, Geforce 6200 video). In both cases I ran into a dreaded KDE Freeze Bug. Unfortunately the KFB has like 80HP and does 20+2 (freeze) damage, making it a formidable enemy even though I'm dumping all my skill points into Linux Tinkering and have several Potions of Diet Pepsi in my inventory.
Screw it. I might put more effort into battling the Freeze Bug, but OpenSUSE 10.3's package manager is absolutely atrocious. There have been so many times over the past month that I've thought about searching for software and decided that it wasn't worth it to wait for the package manager to start up. Even if I had that kind of time I'd still just end up fighting RPM dependency nightmares.
I realize that OpenSUSE 11 is just out and has a substantially better package management system. Unfortunately, they said the same thing about 10.3. If the package manager in 10.3 offered "dramatically improv[ed] speed," then I can't imagine how terrible it was previously. And even if package management is twice as fast in version 11, it would still be too damn slow.
So I installed Ubuntu 8.04 on both machines. So far so good, although I ran into the same old nVidia driver problem on the desktop and the laptop won't shut down properly with my PCMCIA wireless card installed. The nVidia driver issue was easy enough to fix (again), and when I figure out where to tell Ubuntu to eject the PCMCIA card on shutdown I'll update that blog post.
It's nice to be back on Ubuntu even though it uses the ever-ugly Gnome desktop and lacks a proper control panel. The Debian package management is really where it's at. I've decided not to try anything fancy with the laptop either - defaults all the way. We'll see how long this all lasts.
Oh, here's a cute one: when I successfully wake my Thinkpad from hibernation, the Hardy Heron informs me that the laptop was unable to hibernate. Thank goodness for the guy who put the "don't tell me this again" checkbox in that dialog.
Thursday, May 1, 2008
Nagios with NSClient++ Character Flaws
Arg. It can be frustrating to pass special characters to check_nt arguments!
First, the dreaded ampersand (&):
Unfortunately, it appears as though the ampersand is the field delimiter used by NSClient++, so passing an ampersand to check_nt is absolutely not going to work. Take a look at the following code snippet from check_nt.c:
case CHECK_PROCSTATE:

        if (value_list==NULL)
                output_message = strdup (_("No service/process specified"));
        else {
                preparelist(value_list);        /* replace , between services with & to send the request */
                asprintf(&send_buffer,"%s&%u&%s&%s", req_password,(vars_to_check==CHECK_SERVICESTATE)?5:6,
                        (show_all==TRUE) ? "ShowAll" : "ShowFail",value_list);
                fetch_data (server_address, server_port, send_buffer);
                return_code=atoi(strtok(recv_buffer,"&"));
                temp_string=strtok(NULL,"&");
                output_message = strdup (temp_string);
        }
        break;

/* ...and later in check_nt.c: */
void preparelist(char *string) {
        /* Replace all , with & which is the delimiter for the request */
        int i;

        for (i = 0; (size_t)i < strlen(string); i++)
                if (string[i] == ',') {
                        string[i]='&';
                }
}
As you can see, the ampersand is hardwired into the request to the NSClient++ server, so any fix will require changes to both the check_nt plugin and NSClient++.
There is no workaround for this, except to avoid using the ampersand (escaping the ampersand with a backslash ( \ ) does not work). If, for example, you are trying to check on the status of the "Backup Exec Device & Media Service", use the service name instead of the display name -- NSClient++ can use either. In this case, the service name is "BackupExecDeviceMediaService", which you can find in the service properties.
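To see why an ampersand in a service name is doomed, here's a small shell sketch of the framing (the password and opcode values are stand-ins, not real protocol values): it joins the request fields with '&' the way check_nt's asprintf call does, then splits on '&' the way NSClient++ would:

```shell
# check_nt joins the request fields with '&' (see the asprintf call above)
service='Backup Exec Device & Media Service'
request="secret&5&ShowAll&${service}"

# NSClient++ splits the incoming request on '&', so the service name
# gets truncated at its own ampersand:
printf '%s\n' "$request" | awk -F'&' '{ print $4 }'
# -> Backup Exec Device
```

The fourth field ends at the ampersand inside the display name, so the rest of the name becomes a bogus fifth field, which is why no amount of quoting or escaping on the Nagios side can save you.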
P.S. - if you're unfortunate enough to be using Backup Exec, I feel for you.
Next, the dollar sign ($):
The dollar sign is a goofy one too, and can't be escaped with the backslash character ( \ ). Instead, you have to double it and put quotes around it (like so: "$$"). Neat, eh?
An example: let's say that you're trying to monitor the service MSSQL$BKUPEXEC. Unfortunately, this is both the display name and the service name, so the last trick we used won't work. No worries, though, thanks to our friend the double-dollar-sign-enclosed-in-quotes. Your check_command will look like this:
check_command check_nt!SERVICESTATE!-l MSSQL"$$"BKUPEXEC
So awesome! Yay for annoying things!
Note: you might think you're clever and use single quotes instead of double around the entire service name. Unfortunately, that does not work reliably. It does seem to work, however, if you're only checking on one service name in the command. Anyhow, don't bother.
Next up, the backslash ( \ ):
This one is pretty easy: you just double it up (like so: \\ ). Thus checking on a performance counter will look something like this:
check_command check_nt!COUNTER!-l "\\Network Interface(Intel[R] 82546EB Based Dual Port Network Connection - Packet Scheduler Miniport)\\Bytes Total/sec"
Phew, that counter name is a mouthful, which is actually why I chose it. Don't try to manually type in your performance counters; copy and paste them. From a Remote Desktop session on the Windows machine you're monitoring, open up Performance Monitor. Add the counter you're looking for to the graph, select the counter from the legend at the bottom of the window, and then click the Copy Properties button (it's one of the buttons at the top of the graph). Now open up Notepad or your favorite text editor and paste the performance counter data into it. Somewhere in there you should see a .path attribute containing the entire counter reference, which you can copy and paste into the Nagios configuration file you're working with (remove the server name and double all of the backslashes). Thankfully, Windows lets you copy and paste between a Remote Desktop session and your local machine.
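The "double all of the backslashes" step is easy to botch by hand, so here's a small sed sketch that does it for you (the counter name is just an example I picked, not from the post above):

```shell
# Double every backslash in a pasted performance counter path,
# producing the form check_nt's -l argument wants
counter='\Processor(_Total)\% Processor Time'
printf '%s\n' "$counter" | sed 's/\\/\\\\/g'
# -> \\Processor(_Total)\\% Processor Time
```

Paste the .path value in as the counter variable, run it, and copy the result straight into your check_command definition.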
Note: I believe that much of the confusion over passing arguments to the check_nt command in Nagios has to do with this double backslash, which looks like we're escaping the backslash. We're not...well, not really. Don't expect regular Bash shell notation to work in your arguments. Single quotes don't necessarily behave the way you'd like. Escaping doesn't work the way you might expect it to. Don't try to out-think the system: follow its conventions, make no assumptions, and you'll be fine.