Wednesday, 5 August 2009

Terminal headaches

We have been trying to implement some new software for the CRM – the product has been used by one of our sites for some time, but not on the others. They had tried to use it before, but it’s not designed to be used across a WAN, so it had been set up as multiple databases; when they started getting issues, they just stopped using the product.

The company concerned has issued a new version, and our sales people have seen it and really like it. The vendor has produced a modified client GUI that runs in a web browser – the idea being that users on the remote sites would use that, and we could then run a single database for all sites.

Well, that WAS the idea – the software runs OK locally, but through the browser it was noticeably slower. Although it was usable, there was a definite speed issue, and we were worried that the users on the other sites might not be convinced enough to use the product if the speed was poor.

It then occurred to me – the database was installed on the server, and we also had a copy of the client software installed there so that we could test it as it was being set up. I did a quick RDP to a server on the other site, then from there did another RDP back to the server on our site. The speed of operation was good – as far as I could tell, the same as if we were running it directly at this site.
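For anyone wanting to do the same, a saved Remote Desktop connection is just a plain-text .rdp file, so it is easy to pre-configure one and email it out. A minimal sketch – the server and user names here are made-up examples, so substitute your own:

```text
screen mode id:i:2
full address:s:crm-server.ourdomain.local
username:s:OURDOMAIN\jbloggs
desktopwidth:i:1280
desktopheight:i:1024
```

Double-clicking the saved file launches the Remote Desktop client with those settings, which is all the "shortcut" needs to be.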

So I set up some shortcuts and emailed them to the users at the remote site, then talked them through how to save and use them. They agreed that this worked well and they were really happy with the speed of operation. But then we hit a snag – only two users could connect at a time. As we are talking about having some 20 remote users, there is clearly a bit of a problem.

Now my predecessor had bought volume licences for a lot of software, which included some terminal server licences, but unfortunately none of the paperwork specified what was what. I found the paperwork ages ago and set up a profile on eOpen to manage all of the various items. https://eopen.microsoft.com/EN/default.asp - this is a great resource and I suggest that you check it out if you don’t already use it. It allows you to see what the various bits of paper refer to, and it gives you details of date of purchase, vendor, type of licence, quantity etc.

However, when I double-checked, the Terminal Services licence server had been set up and the licences applied – so that wasn’t the problem. I then searched through the various bits and pieces and subsequently realised where it was all going wrong. The server that the software was installed on was set to plain Remote Desktop (administration) mode, which only allows two connections, rather than full Terminal Server mode. A quick couple of clicks and problem solved.
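As an aside, my understanding is that on Server 2003 you can check which mode a box is in without clicking through the GUI, via the TSAppCompat registry value – as far as I know, 1 indicates full Terminal Server (application server) mode and 0 indicates Remote Desktop for Administration only:

```text
reg query "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server" /v TSAppCompat
```

Treat this as a quick diagnostic check only – switching modes is still done through the normal configuration tools, not by editing the value directly.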

So now the staff at the remote site can all connect to the server and all use the CRM software. It seems to run just as quickly when half a dozen of them are using it – so they are all happy!

Monday, 27 July 2009

Windows 7

Like a lot of people, we’ve been keeping an eye on the information coming out about the next version of Windows. We received a copy of the Windows 7 Release Candidate (RC1) on Monday (thanks Georgina) – we have a PC with a copy of Vista Business that we use for testing purposes (a Dell Optiplex 210 with a 1.8 GHz dual core and 1 GB RAM), so we thought we would partition the disk and set up a dual boot so that we could do a direct comparison.

The installation went quite well – some of the usual types of screens for the initial installation, but not as many as we would normally see with older OSes. The actual process took a little over 40 minutes start to finish. We didn’t join the PC to the domain, although we left it plugged in and it picked up the required network settings, so we were able to activate it straight away. We then joined it to the domain a couple of days later – no hassles at all.

I’ve also run another copy on a second machine – an HP dx2450 with a 2.3 GHz dual core and 2 GB RAM. This one installed in just over 25 minutes. Again, it was a very straightforward installation, with only a few screens to configure and absolutely no issues at all.

Initial reaction was that it does look a lot like Vista – same screens, same gadgets etc. However, within a few minutes, it was noticeable that it was faster than the equivalent Vista installation – probably about 40–50% faster. The Start button, Taskbar items and other shortcuts all seem to work much quicker as well – no delays between clicking the button and the app starting to open, which was a bit of a major gripe with the Vista installation.

We added our AV product (NOD32 by ESET) – it worked straight away, without any issues at all. We then added our automatic patching tool (Shavlik) – as Windows 7 is an RC product, we didn’t expect it to work. However, it did actually pick up on the OS, although there were no patches for it at this stage. The second PC was left as a stand-alone system with AVG Free (8.5) installed as the AV product. That PC was connected to the Internet to allow it to run the automated patching – again very quick, very straightforward.

At this stage, we are still testing different apps on the machines – our ERP software, some CAD software, and various applications which we use. Not one of them has had an issue with running – the UAC threw up its warnings, but I don’t consider that a problem as that is exactly what it should do.

We’ve left the test machines in an office for people to check out – so far only a few have taken the time to do this. However, of those that have used it, not one has said that they don’t like it. All comments have been very positive and it seems that a number of people are now very keen to get the product for themselves and we may well consider installing it early next year with our next hardware refresh. All in all, it seems that Windows 7 is just what the doctor ordered.

Wednesday, 15 July 2009

Security 101

I don’t pretend to be a security guru, but over the last few years I have had some specialist training in this area. I’ve also read a number of books on various security topics and have developed a bit of an interest in the subject. As a result, I tend to look at things a bit differently now – and sometimes what I see really gets me wound up.

This morning, I received a telephone call from someone who said he worked for the credit card fraud section of one of the main UK banks. Our company does have an account with them (we actually use several banks) and we get company credit cards through this particular bank. These are used for a number of things – minor expenses, making travel arrangements and, increasingly, buying things on-line. It makes life easier, and the credit control staff in our accounts department can track the charges much more easily than through petty cash arrangements.

The person who phoned explained that he wanted to query a particular payment – not a problem. But then he said that he needed to go through some security checks to make sure that I was the right person to talk to. He asked for the card number, my date of birth, the account password plus some other items – effectively everything that a crook would need to pretend to be me. At that point I refused point blank – he had phoned me, and I had no way of knowing whether he was in fact anything to do with the bank.

I tried to explain this to him – but clearly he was reading from a script and couldn’t deviate from the process. So I insisted that I wouldn’t discuss anything further and hung up. I then phoned their helpline (the number was on the back of the card) and was put into an automated system. Eventually, I got through to a nice young lady who explained that she couldn’t put me through to that department; they only work via outgoing calls and will not accept incoming calls “for security reasons”.

As it happens, she was able to check the required details and I was able to confirm that the transaction was OK. But I have to say that there is something fundamentally wrong with the way this bank is working. I tried to get put through to someone to discuss this – they refused point blank. In fact, it appears that the only way I can register my concerns is in writing – a letter is going off to them tonight and I’ll update this blog to let you know what they say.

To explain why I’m so uptight about this: a while ago I bought a copy of the book “The Art of Deception” by Kevin Mitnick. I was a bit ambivalent about this to begin with, as I don’t think it is right to reward someone for bad behaviour, but I wanted to understand how he achieved the various exploits that he got away with. Although some of the descriptions of his activities are now out of date or only relate to things in the US, the majority of the principles are still very relevant today.

In the book, he described how he managed to obtain information by talking to several people, using one piece of information obtained from one person to persuade someone else to reveal another and so on until he got just what he needed. In this way, he gained access to a lot of really sensitive information, and if he had wanted could have caused a lot of trouble. What is so disturbing is how easy he found it all.

In my case, I refused to pass over the information and then took steps to verify that the person was who he said he was – but it appears that the bank doesn’t want to work that way and in fact actively obstructs a fairly sensible set of precautions. Worse, they are propagating a method of verification that is open to abuse: if the average person sees that the bank does it a specific way, they will assume it is OK and not question someone else who telephones them, potentially leaving them open to a security breach.

Social engineering is a fact, not a theory – that is why so many people still fall victim to scams. The quantity and quality of spam we get is testament to the amount of money involved, and to the number of people who regularly fall prey to these crooks. The risks are well known, and I would expect those involved in security to understand this. If they don’t follow good procedure, how are the rest of us going to enforce it at our level?

Friday, 3 July 2009

Hot, hot, hot...

I booked a week off work last week – no plans to go anywhere, just wanted a bit of a break. It was a glorious week, with lots of sun but not too hot, and I managed to catch up on some outstanding jobs at home, such as painting the windows. I also had the chance to sit around and just relax with a glass of wine or two….

So, back to work on Monday this week. I thought that I would get an early start, as there are a number of projects on the go and I wanted to get a few things out of the way. When I arrived, there was a note on the door – the inventory clerk had had problems getting on the system, so had left it for us to investigate.

When I checked the server room, everything was off and the room was absolutely boiling – we normally run at around 22–24 degrees C, as we find that’s a nice temperature to work in, the servers are OK with it, and it uses less power to cool the place down. I quickly checked and everything had shut down, including the air conditioner, which wouldn’t even restart.

I looked at the UPS and it was showing power going in, but nothing coming out. I couldn’t see an obvious problem, so I grabbed a couple of power extension leads from our office and ran them around so that we could get a couple of systems running. Priority number one was the DHCP/DNS server, so that we had network services – that was the first one up. Next was email – no problem there, it started fairly quickly. But with the room so hot, I had to find a way to get some air movement. Even with all the windows and doors open, the room was still close to 40 degrees.

I pinched some fans from the HR office as a quick fix, and after about 20 minutes the maintenance manager came in. He did a quick check on the air-con unit and discovered that the power breaker in the mains supply in the factory had tripped out – he reset this, but when the unit started up, it wasn’t cooling anything down. He contacted the service company, who sent an engineer down later.

With the rest of my staff in, we started moving a couple of the servers – we have a small backup room at the other end of the building, so we were able to put a couple of them down there as a temporary measure. By about 9:00 am we had most of the systems running, so that people could get on with the daily work.

When the engineer from the air-con company turned up, he identified that the compressor had failed and needed to be replaced. It took a couple of days to get the part, only for him to then discover that another component had failed, causing all the refrigerant gas to leak out. This is what caused the air-con to fail – and as a result everything overheated.

We checked the UPS settings, as it is supposed to send an alert for various events, and it turned out that every event was ticked except the one for temperature. Doh! Basically, the device had gone up to 60 degrees C and then just shut everything down. In addition, a switch on the device had tripped, preventing any outgoing power.

So now we are almost back to what passes for normal – we have to make time to come in one Saturday to put everything back in place, as it takes longer to build a rack up than it does to strip it down. But the air-con is cooling away nicely and hopefully, now we’ve ticked the box, the UPS will warn us of any similar event in future.

Monday, 29 June 2009

Castle walls

Just over a week ago, Microsoft held their Technet Virtual Conference – I found it a really useful event and there were a lot of interesting features. If you missed it then you might want to know that the material is still available from their main website.

During the day, items were split between technical and management; the first item in the management section was a recorded talk by Miha Kralj, one of their senior architects. He had a lot to say on the topic of where IT is likely to go over the next decade, and it was delivered in a straightforward, humorous fashion. I found that I agreed with much of what he said – but there were a couple of items where I think he was a little bit out.

He talked about people in the workplace – how they fall into certain categories: Baby Boomers, born in the 20 years after WWII (which includes me!), Generation X, Generation Y and the latest additions to the workplace, the Digital Natives. He stated that this latest generation is much more attuned to using computing devices, and companies need to take this into account when planning for the future.

He argued that the Digital Natives are used to newer technologies such as instant messaging, social networking sites such as Facebook, video sites such as YouTube and photo sharing sites like Flickr, and will expect to be able to make use of these as part of their normal work routine. They are therefore unlikely to be happy conforming to corporate rules preventing the use of these products, and so companies need to “tear down the walls” to their networks.

When I heard this, my immediate reaction was one of horror – like many others, I have had to deal with issues such as virus or spyware infection caused by a user opening an email or downloading a file that is actually a piece of malware. The old saying “an ounce of prevention is better than a pound of cure” is very relevant for those of us at the front end.

I understand the value of making use of these products, and in fact we are looking at introducing some newer methods of communication to improve the way that people work. But I am also very concerned about security. The reality is that the majority of users are still very naïve about safety measures – those of us entrusted with system administration cannot afford to rely on the users to keep themselves safe, and we have to make sure that they are not put in a position where they can compromise the security of the network.

Unfortunately, the new Digital Natives may well know how to do things, but are not yet savvy enough to know if they should; or more importantly, why they should not do something (and for that matter, most other users are just as bad). We may be able to allow some windows into our secure networks, but to remove the protection completely would be a very foolish thing indeed.

Sunday, 21 June 2009

Technet Virtual Conference June 09

One of the problems for many people working in IT is the tendency to work in small groups, possibly even alone – there are many more of us working in teams of five or fewer than there are in larger groups. Unfortunately, this can cause us to develop a “silo” attitude to working. It’s then very easy to become blinkered in our attitudes and the way that we work.

For that reason, I try to get out of the business occasionally to attend various events, and I encourage my staff to do the same, so that we can see what else is going on in the world. In the last few years we’ve been to various seminars on developing technology that we thought might be of use to us and needed to learn more about, and of course we always try to get along to the supplier events (just a hint to the suppliers – guys, forget all the crappy junk that you hand out, it’s t-shirts we want!)

Over the years, I’ve seen the Tech-Ed events and have wanted to go; but the company won’t pay and I can’t justify stumping up the cash myself. So when it was announced that the Microsoft Technet team were planning to hold a “virtual” conference, I was intrigued. I work quite a bit with video-conferencing and audio-conferencing – and as part of my studies through the OU, I’m used to collaborative online work with forums, wikis and blogs. For me, making it an online experience makes a lot of sense – instead of spending money on event facilities, the resources can go into the content.

If you didn’t get the chance to attend the event, then most of the material is still available on-line at: http://vepexp.microsoft.com/govirtual
and I understand that this should remain available at this location until September 09 – I imagine that it will be available after that, but filed away somewhere else. I would suggest that there is something for everyone – plenty of useful material for techies and managers alike.

Now many people can get cynical about these sorts of things – they see them purely as a sales vehicle. I understand those concerns, and yes, it could be argued that Microsoft is trying to sell us on the idea of buying more of their products. Well, duh – they are a commercial enterprise; of course they want to sell things. However, the event was much more about the ideas behind the technology and the ways it can be used.

We are currently doing some evaluation work with Windows 7, and there were a couple of items during the event that discussed new features and the way that Microsoft sees it being deployed. These were very useful – they highlighted bits that we hadn’t actually seen, and we will be making a point of checking them out at some stage. There was also information about some of the additional features in Server 2008 R2 that we want to look at – and there was a session on Data Protection Manager 2007, which my staff and I think is one of the most valuable and useful products we have ever bought.

A few minor criticisms – I had a couple of issues with some of the material, probably because I was watching on a laptop whilst doing some other work: on occasion the videos were a bit jumpy, some of the lip-syncing was slightly off, and the presentation slides could be out of step with the talk. I had a problem with one of the sessions; it froze part way through and wouldn’t restart. (OK, I need to buy more memory for my laptop – I only have 512 MB.) However, I went back to it the following day and watched it all the way through. There was also an issue with the chat function – apparently even the Technet staff had this problem.

On the positive side, I would highlight one particular session that stood out for me – a look at the future by Miha Kralj. Really thought-provoking and delivered with a sense of humour. I have to say that I do take issue with some of his points and may discuss them in more detail in subsequent blog posts. But don’t take my word for it – go to the site and hear what the man has to say for yourselves.

All in all, two thumbs up for a very useful resource produced by the guys and girls at Technet – I think they all deserve a big pat on the back for a job well done. I’m told that around 4,000 people took part on the day, and I really hope that many more go back to the site to check out the resources in the next few months. I understand they also plan to hold more events like this in future, and I for one will definitely be taking part if possible.

Tuesday, 9 June 2009

You don't want to do it like that .....

A few weeks ago, I was invited to go to another company. Whilst there, I had the chance to talk to a couple of their IT people about some of the issues that they face.

One of the first things I discovered was that they have a real problem with their Exchange server – it regularly stops working because the database dismounts. I was interested to know why, because we have only had that happen to us once in four years, and that was just after we had migrated from Exchange 5.5 to Exchange 2003.

It appears that their mailbox database is 85 GB in size – quite a bit over the 75 GB limit that is referred to in all the material on Exchange. Everything I found indicates that exceeding the limit will cause regular dismounting of the database, as the Standard edition of the product enforces it.
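If it helps anyone in the same boat – my understanding is that on Exchange 2003 SP2 Standard the ceiling is actually configurable via the registry (the default is 18 GB, with 75 GB as the hard maximum). The key lives under the information store service; the server name and store GUID below are placeholders unique to each installation, so treat this purely as an illustrative sketch:

```text
; Illustrative only - <ServerName> and <GUID> are placeholders for your own store
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\MSExchangeIS\<ServerName>\Private-<GUID>]
"Database Size Limit in Gb"=dword:0000004b   ; 0x4b = 75 decimal (GB)
```

Of course, no registry value will help once the database is already 85 GB – at that point the only real options are trimming the store or moving to an edition without the limit.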

I was a bit surprised at the size of their mail store – ours is just over 16 GB and we have about the same number of users. I told them that we operate a rigid set of limits – 200 MB per user for the mailbox and no attachments over 5 MB in size. They were astonished that we could get away with that; they told me that their users would be very unhappy with such limits. But, as I asked them, are the users happy that the email system goes offline several times a week?

We’ve found that if you allow certain people more space, they just push it to the limit, and if you then give them more, they will just save more rubbish. We’ve had people delete emails, then leave them in the deleted items folder – just in case they want to refer to them later. We’ve had people keep emails from 10 years ago – in many cases the sender or recipient concerned is no longer around. Unfortunately, our experience shows that users will not manage their mailboxes unless you force them to.

We also found that people were just emailing files without thinking about what they were doing – no attempt to compress them, or even to check whether it was appropriate to email the files at all. The worst case was someone from a sister company sending in an 80 MB .pdf file – to make it worse, the recipient was the CEO and he only wanted one page from the document, not the whole file. We also regularly get people sending large files to multiple recipients – a few weeks ago, someone tried to email an 8 MB software attachment to 20 people.

So we enforce the limits with absolute rigidity, and for the most part our users are used to this. We do allow them to archive some mail off to data files stored separately on a server – and these are then backed up as part of our normal backup routine. As a result, we get very few problems – which would seem to indicate that our way of working is efficient, and that other people would therefore be wise to follow what we do.

However, what works for us most definitely would not work for everyone. I’m aware that there are people who need to keep emails for much longer and are not allowed to delete anything, as they have to keep records of all contacts for regulatory reasons. There is a tendency for IT people to assume that what they do will work for everyone – a bit like the Harry Enfield character who insists “You don’t want to do it like that, you want to do it like this….”

Unfortunately, in many cases, the person so insistent that he knows the best way to do something is unaware of all the facts. I had exactly that a few years ago: someone insisted that I could fix a problem by changing a particular TCP/IP setting. When I pointed out that we were using IPX/SPX, it meant nothing to him – he had never worked with NetWare and didn’t understand the difference between the two networking protocols.

Despite this, I am of the view that we could do a lot more in the industry to pass information on good practice around between people. In our department, we regularly find hints and tips that we like to test out in case there is something that helps make our job easier or prevents problems from occurring. Sometimes they work, sometimes they don’t – but it’s all good.