Wednesday, 15 December 2010

BCS - Retro computing

Most IT staff work in fairly small groups; even in the larger companies, teams break down into groups of just a few people. As a result, it's easy for people to develop a "silo" mentality, and forget that there is a larger world out there.

For that reason, I like to try to get to various events where there is an opportunity to speak to others within the profession. It's really useful to be able to share ideas, talk about common problems, and to know that there are other people that have exactly the same pressures on them and, all too often, the same feeling that their work is not appreciated.

The BCS in the South West organise a number of events throughout the year, although there tend to be more during the winter and spring terms; most of the organisers work in academia, so they are busy with exams during the summer months.

The latest event at the University of Plymouth was a talk on "Retro computing"; a look back at some of the hardware and software systems of the last half century. It was quite amazing to recall the changes that have occurred over that time, to see once again the boxes that seemed so modern and powerful at the time.

They had a good amount of older equipment on display; items that have been picked up over the years and kept to form part of a "museum of computing". People had the opportunity to use a few of these old devices; it was quite interesting to be able to once again play a game of Lemmings on an old Amiga.

However, it wasn't just about games; they had some emulation software there that showed how some of the older systems used to run and what kind of business systems were running on them. As someone who had once had the opportunity to create a program from scratch, by designing the flow chart then creating the commands on a series of large punch cards to be processed on the mainframe at County Hall, I had a strange sense of nostalgia.

For some of those there, most of the hardware was beyond their recall; several students were actually younger than some of the exhibits, which is quite a scary thought! It just makes me wonder if my nice new shiny HP laptop will seem as ancient and irrelevant in another 20 years.

The BCS South West are also starting a new web site to act as a repository for some information on older computing. The site is there but nothing is available just yet (http://retrocomputing.org). I'm told that they intend to slowly build this up with the help of a few volunteers in the months to come.

In all, it was a really interesting evening, with a lot to see and do. It was also amusing to see who were the highest scorers in "Crazy Taxi"! Clearly there were a lot of people with grey in their hair that had spent just as much time playing games as some of the younger generation.

Saturday, 11 December 2010

Sec-1 Penetration Workshop

On Friday 10th I went to a workshop event held in Bristol. It was organised by Sec-1, a specialist security firm (http://www.sec-1.com/) - note the correct address; if you get it wrong you end up at a completely different type of business!

Obviously, these events are to promote the company and their services; however, it wasn't just a massive sales pitch. The main purpose was to offer people advice about maintaining good security practice by illustrating just how easy it is to break into systems and highlighting the reasons why.

The speaker was Gary O'Leary-Steele and he spoke with passion, conviction and a great deal of knowledge. He indicated that they have carried out many penetration tests over the years, and in most cases they could use the same report over and again, just changing the name of the organisation. This is particularly the case in the 150 NHS trusts they have tested, but is also often true of many private sector businesses.

He stated that in many cases, people have failed to install patches issued for specific problems, often long after the issue has been identified. As it happens, I did a quick search on MS06-040 & MS08-067, the two main culprits, and in each case the autocomplete kicked in after just the first 4 characters; the problems are that well known.

He went on to discuss some of the most common problems and illustrated how they could be used to access systems, demonstrating how easy it can be to identify vulnerable systems, get access to accounts with inappropriate levels of security permission, crack passwords and elevate permissions. In most cases, the team of testers expect to get access within 30 mins - if they take longer than an hour, the others tease them unmercifully!

Most of the tools that they use are freely available on the Internet. In some cases, they do use commercially written items for which there is a small charge, but generally those are for the real high-end stuff. Each tester has their own favourites, in much the same way that people do with most other kinds of software.

Whilst going through the potential problems, Gary also indicated some of the possible solutions, often by using the software tools to confirm the problem, then implementing suitable practice or policy to ensure that something is done to minimise the problem or reduce the impact.

It should also be said that many of the exploits demonstrated were in Microsoft operating systems or software; but the speaker very carefully highlighted that issues are just as prevalent in other products. Mac, Linux, Adobe etc. were all shown to be just as insecure. In many cases, this was down to installation or configuration, but equally there were many flaws straight out of the box.

I'm not a security specialist, although I have had some training in this area. I also enjoy some of the work involved, although it has to be said I don't think that I have the necessary skills to make this my specialism. However, I think that I know enough to be able to state that there are a lot of people that suffer with "delusions of adequacy"; they think that because they use a particular product, or do a specific thing, that makes them invulnerable. Often, they are so wrong that it is difficult to know how to take them seriously in anything.

I'm going to say that it was a great day, a really useful workshop and I was very impressed by the whole event. If they organise any more (and I'm told they certainly hope to) I would very strongly suggest that you grab the opportunity to get along and take advantage of the information and advice that they are willing to hand out free of charge.

Sunday, 5 December 2010

V Two

Following on from last week's blog.

So we bought the hardware, and after it had been delivered, installed everything in the rack, and sat back to start planning the installation. I started up one of the host machines to get a look at the POST and boot processes. To my surprise, an operating system had already been installed - and it was Windows Server 2008 R2 Datacenter. We had purchased the licences for this, but hadn't expected that they would pre-install it.

Well, no problem; we just had to install VMware ESXi. I had a copy of the ESXi software, but it was an older version, so first I had to download an updated .iso image and create an install disk. Having created the disk, I was then able to do the install. I was really quite surprised; it went through very quickly, with very little to see, just a few Linux-style screens showing the progress of the install. After just under 15 minutes, it was all done.

So obviously, it also made sense to do the other two hosts at the same time. Away I went and the second machine was done in much the same time, everything complete with no issues. I then started the third machine, and decided to go for a quick cup of tea as there seemed to be no point in me hanging around watching a series of dots advancing across the screen.

But when I got back, I had a bit of a shock; the process had stalled part way through. The equipment didn't seem to respond to any keystrokes, so I took the disk out to check if there was a fault, but it seemed fine. I tried to start the install again, and unfortunately, once again it stalled. A third attempt fared no better, so I decided to take a break and look at the vSphere client install whilst I thought about what the issue could be.

I had already installed a copy of the latest version of the vSphere client on my laptop for our test a short while ago, and just had to change the logon details. It connected to the host machines without any issues and I could play around with the various bits. I even did a quick install of a guest operating system to create my first virtual machine. Everything looked really good.

However, I then noticed that there seemed to be something odd about the disk allocation on the datastore on the server. There were several partitions, none of which I had created. Worse, it seemed that several of these were unusable by either VMware or the guest OS. Having given it some thought, it seemed to me that when the ESXi software was installed, it didn't re-partition the disk in the way that might be expected, and part of the disk would never be available for use, which might be an issue.

At that point, it seemed appropriate that I should go back over the ESXi software install. I did this, checking the process, and at no point did it actually indicate that there was an option to manage the partitions. In the end, I simply put the Windows disk in, then used the install routine to start up and delete all existing partitions. After that, I ran through the ESXi install, and this time, it made all of the disk available for use. I then decided that I would do the same on the others, and the second machine completed without any issues.

The third machine also allowed me to delete the partitions OK and there seemed to be no reason why the ESXi software shouldn't install. But still it would only go so far, then it stalled every time. I went through this a couple of times, before going back to my desk to give it some more thought. And at that point, I discovered the reason why; it was so frustratingly simple, I am almost embarrassed to tell you what it was.

We use a very clearly structured IP address range within our network; servers get a static address in one subnet, and all addresses assigned via DHCP are in a slightly different subnet. The address that I had input as part of the install routine was an address within the server range and one that had been specifically reserved for the virtualised platform.

But somehow, the address allocated for the third machine had also been given to a secondary network card on an old server. Someone had added a cable to the NIC and then plugged it into a network point. The install routine had failed because it detected that the address I tried to give it was already in use! Once I sorted out the superfluous NIC, the install routine went through without any more issues.

At this point I had 3 host machines, all installed and a connection to each tested with the vSphere client software. A good start and I felt that I was starting to understand VMware. I still had a few other things to go over, but I was feeling really quite positive about the various processes and was looking forward to getting on with it.

But the next step will have to wait for another day 8-)

Friday, 26 November 2010

V for Virtual

For some time now, we have been looking at a project to implement virtualisation. I decided that this would make for some interesting blog entries, and I thought that I might focus on this for a while.

First of all, I suppose that I should go right back to the beginning to explain some of the reasons behind the decision. When I first joined the company, the servers were mostly tower models that stood on a table in a small room. These devices had limited processing power, and low memory and disk storage even by the standards of the day; they were not really up to the task required of them. It should be said that they were most definitely not cheap, but they certainly could not be described as good value for money.

We needed to buy some newer machines to replace this old equipment, as a matter of some priority, in order to provide urgently needed resources. As part of the project, it was agreed that we would move to rack mounted equipment; this made far better use of the available floor space, as we could get a lot more in the same area. The equipment was not totally top of the range, but was very good quality, a good specification and thanks to some quite keen negotiation (though I say it myself) was pretty good value for the money.

This made a huge difference to operations. Within a short time, staff could see significant improvements in speed of operation, we had much better storage facilities, and it was all much more flexible. This all helped demonstrate that the investment was appropriate; and I was also able to confirm some of the benefits using some standard metrics.

But that was some 5 years ago. That same equipment is still functioning, and thanks to some upgrades is still providing a good level of service. However, it has been identified that across the estate, much of the processing power is underutilised. Although some machines make full use of their memory, more than half do not. We have a couple of servers with disks getting quite full, but the rest are using less than a quarter of the available space. The most obvious excess is in the network cards; generally, they are using less than 5% of the available capacity.

The servers were also manufactured before the newer energy saving technologies now available became common; they use quite a lot of electrical power, both to operate and to cool. We ran some tests and found that they would operate just as well at a warmer temperature than had previously been used, and this helped reduce the need for cooling, so it did save some electricity, but we felt that it should be possible to do better.

Of course, it was also identified that with equipment approaching 5 years old, there was an increasing chance that we would see some hardware failure. This was the main concern for me; it seems foolish to be miserly with spending on hardware when a failure could cause huge losses to the business due to loss of data or operational capacity.

After identifying the need for replacement equipment, we started to look at newer versions of the same hardware; this had a number of green options for power saving, but I was still concerned that we would be paying for extra capacity that never got used, even allowing for growth within the business.

Like a lot of people, I'd heard about virtualisation, but wasn't sure if it would really work for us. I was offered the chance to see some Dell kit in action, along with the EqualLogic SAN units. These were really impressive, and gave a lot of options. I also compared these to some HP hardware with StorageWorks; these looked a little better, if also a bit more expensive.

The next step was to consider what virtualisation software to use. I had some spare hardware and installed evaluation copies of both Hyper-V and VMware. I also took the opportunity to see some Citrix systems in action. It wasn't really possible to do as full a test as I would have liked due to pressure of work, but it soon became clear that the decision would come down to a choice between Hyper-V and VMware. I liked both and felt that either could do a really good job; it was just a case of which we felt we would be happier with in the long run.

At this stage, I managed to get some basic technical books for the two software products; I had hoped that this would help to make the decision a bit easier, but unfortunately, it didn't really help at all. In the end, I decided that we would go with VMware; the product looked a bit more polished, it's been around longer and is more mature.

So at that point, I started to do some negotiation with the suppliers. This went on for a while, and yes, I played them off against each other. But ultimately, I managed to get a deal that I thought was worthwhile, that the supplier was happy with and that I could sell to the senior managers. There was a slight delay getting the stuff onsite, but it's all here now, and we are starting to install it; but that's going to be the topic for another occasion.

Thursday, 18 November 2010

Bookworm part 2

Just a (fairly) brief addendum to my previous post about the Amazon Kindle. I've taken a week's holiday (I had a very nice time, thank you) and I made really good use of the Kindle whilst I was away.

I'd ordered and downloaded a number of books beforehand; a bit of a mixture, some thriller, some technical stuff, some historical and some classics. I should note that all of these were free!

I'm not a sunbathing freak; I will do a bit of lying around, but generally get pretty bored after a while. I mostly used the Kindle in the evenings, after supper and just before going off to bed. However, there were a couple of occasions when I sat out on the balcony to catch some rays and used the Kindle to occupy my mind.

The screen is really easy to read even in bright sunlight (and it was bright) and the text is really clear. Changing pages is really simple; the buttons are on each side and have a nice solid feel to them. Changing books is not too difficult; but I do feel that the square button with its ring for the selection and entry functions is a bit less solid.

If I had tried to take the same books with me in paper format, I would have needed a much larger suitcase; stacked on top of one another, they would have been at least 35-40 centimetres in height (roughly 14-16 inches in old money).

There is no doubt in my mind, the Kindle is a great little toy. If I didn't have one, I would say that it would be top of my wish list. I would say, though, that I'd advise getting a proper cover for it; I got a rather nice black leather one, but there are others in different colours and patterns. But each to his (or her) own.

I haven't seen the Sony e-reader, so can't compare it; but I have shown my device to some others who seem to think that they prefer the Kindle. (But that's just their opinion.)

At some stage, I think that I will subscribe to a magazine as well, and I'll do a write up to confirm how I get on.

Tuesday, 2 November 2010

Springboard Tour 2010

It's been a pretty busy weekend. I went up to Wembley to watch the NFL and stayed overnight so that I could get to Reading early on Monday morning to visit the Microsoft campus for the UK leg of the Technet Springboard Tour. This event was the only one in this country; the others are in major cities across Europe.

http://springboardseriestour.com/

The Springboard tour is about promoting the latest technology and providing opportunities for people to see the products in use. They also covered some of the reasons for migrating to the latest versions and highlighted tools and resources that can be used to make the process a lot easier.

I really like visiting the Microsoft Campus; there is always an energy and a buzz about the place that just makes you feel that it is great to work in technology. I believe that all too often, those of us at the sharp end get very isolated and develop a silo mentality to the work we do. It's important to take the chance to get out to see other people and understand that we are all part of a much larger community, that there are others that have exactly the same kind of problems and that there is more than one way of tackling the issues that we face.

The presentations were introduced by Stephen Rose - and I have a link to a video that he made a while ago. He says that he had drunk about 2 gallons of coffee before the filming and I can believe it!

http://www.youtube.com/watch?v=H2ewOGNGmZY

During the presentations, they made really good use of the demos to show just how you might improve the rollout and migration process. The tools provided are all available through the Technet site and many are improved versions of things that are already in use. There was someone with a video camera filming the event, so some of these sessions may be added to the main site (link above) in addition to the pre-prepared videos.

Unfortunately, the sessions slightly overran - and there were a number of people that had to leave early, missing the final demo. This was of the Diagnostic and Recovery Toolset (DART). I'd very briefly heard of this before, but hadn't really had the chance to work with it. It looks like a really valuable asset for anyone providing any level of support to end users, and in particular anyone providing support for fatal errors. We will definitely be downloading it to give it a try in the next few weeks.

There was a bonus for those that attended; a free copy of Office 2010! There were also a few other little giveaways and prizes just to say thanks for being there. If you missed it, then you would have to go to one of the events on the continent, as there won't be another one in the UK. However, the presentations and information on the resources are on the Springboard site and I would recommend that you take the time to check it out.

As you may gather, I found the whole day a very good use of my time and really enjoyed the chance to talk to the various people. I am sure that I will be making really good use of the information that I picked up there in my daily work over the next few weeks.

Thursday, 28 October 2010

Bookworm

I've always been a bookworm. As a child, I was one of those that used to take a torch to bed so I could read under the sheets. I used to go to the library and draw out a couple of books and read through them in a matter of hours.

Even now, I have a large personal store of books. At the last count, well over 700; a mixture of hardback and paperback. About 150 of these are technical reference books for various things or books for my studies.

When the concept of the ebook reader was first publicised, I was quite keen to see one. I thought that the concept was good and could see real value in it; but I wasn't quite so sure about the price. I've been hoping that some kind person would buy me one for a present (yeah right!) or that I might win one in some prize draw. But sadly, no such luck.

Anyway, a couple of weeks ago I decided that it was time for me to get one for myself. I had a number of Amazon vouchers which were from various sources, and I decided that I could trade these in as part payment on a Kindle. I bought one and a small leather wallet to keep it in. I also downloaded the software and got a number of free ebooks from the Amazon site.

The Kindle turned up just over a week ago, and I've been playing with it ever since. It is so good! The text is really easy to read even in strong light; I don't need to change the font size although that is an option. I had a couple of issues getting it synched through the wireless, but that was down to me typing the encryption key in wrong. Once I got it correct, the device connected and updated everything straight away.

I've already gone through a number of books, and really enjoyed using the device. I don't think that I'm going to have a problem as it is supposed to hold about 3500 titles. At the moment, I've got some 2 dozen books stored; that should be enough for me to take on holiday in a couple of weeks.

The alphanumeric buttons are a bit on the small side, but as I don't use them that much, I don't see that as an issue. There are a couple of big buttons on the side to change pages and they are quite firm to use. The only real criticism is the silly button with the tiny square around it for selection / entry; I'm sure that they could have designed something a bit more solid.

The Kindle also gives the option to have newspapers and magazines on the device; as you have to pay for those, I'm not so keen on the idea. But there is a particular magazine which I might sign up for, just to try it out. At 99p per month, I think that I can afford it. It's also supposed to allow you to read certain other types of files, but I haven't tried that yet.

As you can tell, I think that this is a great little device. I'm really pleased that I bought it, and I think it's well worth the money.

Monday, 4 October 2010

SharePoint Saturday 2010 UK

A couple of months ago, I first heard about the SharePoint Saturday UK event – not sure if it was through a tweet or an email. There have been a number of similar events around the world before, but this was the first in the UK.

http://www.sharepointsaturday.org/uk/default.aspx

I’m always interested in these types of events as they offer you the chance to learn new things, brush up on existing skills, and reinforce knowledge. It also offers the chance to network with other people in the industry, which I consider is always a useful exercise. On top of that, you often get the opportunity to speak with people that have highly specific knowledge of their topic.

SharePoint is a product that I have experimented with but purely for evaluation purposes. I believe that collaboration between staff is going to become a major initiative, and SharePoint is a tool that can really help bring people together and allow them to work more sensibly. I hoped that the event would enable me to learn more about the latest iteration of the product and understand more about what it can do and what limitations it has.

The event was held at the Birmingham Hilton Metropole hotel at the NEC. This is a very nice location, quite central for most people (although a bit of a journey for me). The hotel had a lot of suitable resources and I think that it was a great location for the event. I should also add that the event was free to attend!

There was a really good mix of topics – some were quite technical, some were a bit more of a high level overview, so there was plenty for most people to get involved in. A couple even involved demos of various issues, which were really helpful. I particularly enjoyed the PowerShell administration demo by Penny Coventry; as I have recently been doing some work in this area, I was able to relate it to the stuff that I had been looking at, and had the chance to clarify a couple of small issues.

What was quite amazing was that the individuals organising and speaking at the event were doing so in their own time, and travelling to the event at their own expense. When you consider that some of them had travelled from the States, South Africa and further afield, this shows a particular level of dedication to the concept of passing on knowledge. Many other people have expressed their gratitude, and I think that I have to add my thanks as well; they certainly deserve high praise.

I also have to say that the buffet lunch provided was really excellent. I have to get the recipe for the Blue cheese, mascarpone and red onion quiche tartlets - they were really delicious and I must admit that I ate more than a few of them! Not good for the waistline, but for a one day event, very enjoyable indeed. My compliments to the chef!

Another big thank you has to go to the event sponsors; apart from paying for the whole day, they provided a large number of valuable prizes which were awarded at the end of the day. Among these were a Kindle, an iPad, an Xbox, about 70-80 books, t-shirts as well as some really valuable licences and training offers. There was almost enough on offer for most people to walk away with at least one bit of swag.

The day finished with SharePoint Saturday 2010 UK turning into SharePint; the chance for everyone to head for the bar. I carried out a completely unscientific study amongst a number of those present, and it was clear that everyone had had a great day; learned a lot, had the opportunity to see some really valuable demos and network with other like minded people.

If you missed the event and want the chance to see another, I would bookmark their web page and watch out for next year. I get the feeling that they hope this can become an annual event. Certainly I wish them well; the work that was put in to organising it deserves the recognition, and I think that it could become a very valuable resource for anyone interested in learning more about an underrated piece of software.

Wednesday, 29 September 2010

The birth of a Third Platform

The BCS South West region hosts a number of events; I like to go along to these as they usually include some very interesting topics, but it’s also quite useful to network with other IT pros from different backgrounds.

At a recent event, there was a guest speaker from Apple; Lawrence Stephenson talking about “The Birth of a Third Platform”. He was discussing the rise in use of iPhones and iPads, particularly by students at schools or in University / Colleges and proposed that this is a new form of computing. Although primarily about higher education, much of what he discussed was also relevant to business.

The basic argument was that the mainframe systems were the first generation of computing, and the standard client / server technology that we have become used to is the second generation. The third generation is therefore the use of mobile computing devices as access points to process or make use of data; hence the “third platform”.

He illustrated his talk with some interesting facts about the growth in the numbers of smartphones and tablet devices, particularly among students. He also compared how these are used; to access email, social networking sites, general web browsing etc. He identified that some students were using their devices to access items related to their courses, but this was still a relatively small number, and there was potential for growth in this area.

He demonstrated by showing some apps that had been developed for a university in the States; and these were clearly items that a student would find tremendously helpful, particularly for those new to university life, such as campus maps etc. All in all, a really good demonstration of just what can be done.

There was one very interesting comment though; he showed some statistics that could be used to suggest that most people actually use their device more for accessing data than they do for making phone calls. As such, there could be an argument for saying that it is quite possible that some future device might not actually have a phone capability as such; you would be more likely to contact people using IM, or calls would be routed through an IP based utility such as Skype.

Of course, these types of devices are not new; tablets have been around for some 10 years. However, the advent of the smartphone has encouraged the development of small apps that allow people to do specific tasks really quickly and easily, and that has made a huge difference in the take up of people using mobile computing. As people have found new uses, it encourages more people to make use of them, and more developers to consider writing apps for specific requirements.

Most companies have “road maps” that give a structure to their research and development process and show the customer what they are working on for future products. Apple are a bit tight lipped about their vision for the future, so it is difficult to be certain about what they have in the pipeline. However, I would suggest that they (and many others) are working on the basis that there will be more people wanting to make use of mobile devices.

Who knows; maybe in the not too distant future, we won’t be using PCs any more, but will just do all of the work using a mobile device.

Monday, 13 September 2010

Watch the pennies

..and the pounds look after themselves. So the old saying goes.

Yes it’s that time of year again; time to think about next year’s budget. Our company financial year runs from 1st Jan to 31st Dec. The FD needs to check it over and approve it and he needs some time to cook the books (sorry, prepare the COA), so we need to get budget plans drawn up a few months before December.

I tend to start by writing a list of the specific jobs that we intend to do, plans to replace major hardware such as UPS or servers, major software upgrades like the move to Windows XP a few years ago. It can also include work that we think we will be required to do; currently we are waiting for the go ahead on new offices and they will have to be cabled up. I try to get a quote so that we have a fair estimate of the cost.

We have a lot of specialist software for CAD drawings etc. and these have quite expensive support costs. Added to that is the support costs for CRM, ERP and so on. In some cases, I think that it would be useful if these were in a separate budget, but they are not so I just have to get on and deal with it. I also add in an amount for other software upgrades.

The next step is to think about smaller hardware purchases; monitors, disks, cables, replacement printers etc. I also consider consumables; toner cartridges, disks and batteries. I try to work out what we have bought / used in these areas, then use that as a benchmark for the next year.

We also need to plan for Business Continuity / Disaster Recovery. This requires that we keep some spare equipment, pay towards a BC / DR partner and take appropriate actions to make sure that we can be flexible enough (and secure enough) to put things into place at short notice to allow the company to deal with sudden problems.

Once all of these items have been assessed, I put them into a spreadsheet. I tend to leave details on the form so that the FD can verify it; the more detail the better as it saves him pestering me. It also means that he sees part of the justification for the spending which gives him confidence that I have thought things through.

In our case, I also try to work out roughly what we are likely to spend on travelling to work at our other sites; hotels, mileage allowance, flights as appropriate. In addition, I also add an allowance for some training costs, as we have had to learn a lot of new skills around our ERP system and the training is absolutely essential. As it happens, the FD generally accepts my figures (although he does occasionally make some changes to match his numbers).

After all that however, the big thing is to try to stick to the budget. Sometimes this is easier said than done. Generally, the smaller amounts are easier to offset within a budget. For example, someone managed to destroy a laptop a few years ago (he ran over it; he forgot to put it in the boot of his car!) and we needed to replace it at short notice. We could just put that down as a replacement and not worry about it.

However, it is also possible to get a requirement for much larger items – we had to buy an add on disk unit for a server which had not been budgeted for – not the end of the world, but it meant we had to be a bit careful about some other spending.

But all in all, it seems to work for us. The FD is happy, the MD is happy, the staff are generally supplied with what they need, when they need it. We get to manage things ourselves, which is a lot better than having to justify every single item of expenditure. I’ve seen places where this happens, and I would not like to have to be working under those conditions.

Friday, 3 September 2010

New Skillz

Occasionally, I think back to when I first started working with PCs in the late 80s.

At that stage, there were relatively few companies that made use of these and it was very much a hobby, although one that I enjoyed. I managed to get hold of some second hand equipment and by trial and error, worked out what everything was and how it worked.

In the mid 90s, I had the chance to work with computers as a job; primarily in a customer support capacity, but I also looked after the company hardware, network and server (yes we only had the one). In those days, it was considered normal that someone working in IT would have a broad range of skills and be able to turn their hand to whatever task was needed.

But in the last 10 years, we have seen a major change in the way that things work. There has been a considerable need for people to become more focussed in a specific area, whether that be database administration, programming, networking, telecoms etc. In the very big companies, they even have teams of people within these disciplines.

For the smaller shops like ours, this makes life a bit harder. We only have a couple of staff, but we still need to provide the same level of support on the newer systems. There is still an expectation that each of the IT staff has all of the relevant knowledge to instantly know how a product works, what is causing a problem, and with a wave of the magic wand, can fix it.

In the real world of course, it is completely different. In most cases we have some good general knowledge of hardware and some good experience of using a couple of products. We’ve then developed particular skills in specific areas. For example, I have had to do a lot of work with SQL Server over the last couple of years, and although I wouldn't describe myself as a DBA, I have a pretty good understanding of it. I have also had advanced networking and routing training, as well as some extra work in security.

Among the staff, we have each developed key specific skills; and we can share the work out in a way that allows us to be most effective. As a small team, we work quite closely, so still get the opportunity to broaden our skills base, probably far more than those in larger teams would be able to do. But we still have to learn those new skills and there is no question that even within a team the size of ours, there is a definite division of labour based upon speciality.

There are of course many companies that suggest we should outsource some of the work, and I can see a certain value in that. But I have not yet seen any outsourcing operation that will provide the level of support we currently deliver at an acceptable price. It’s also likely that if we did outsource part of the work, all that would happen is that the users and management would still insist that we try to fix things for them anyway, defeating the purpose of outsourcing.

So for the moment, we just have to try to learn as much as we can, as quickly as we can (and probably as cheaply as we can). I’m looking forward to the day when we can get the plug in brain nodes that allow us to download information directly into our brains, without the pain of going through the learning process!

Thursday, 26 August 2010

Cool!

Some years ago, we undertook a small experiment with our server room. We had heard that other people were reducing the amount of A/C cooling they used and we wanted to see if it was appropriate for us.

Like a lot of other places, our small server room was kept cool to keep the servers cool; if we were to spend any length of time in there, we would need to put on a jumper or even a fleece to stay warm, as the room was around 10 degrees centigrade. The A/C units were running non stop, and we wanted to see if we could reduce the electricity we used.

Essentially, we made a load of measurements to get a base line. These included the core temperature of the UPS, some measurements of the servers and various places within the server room. We were fortunate that our engineering manager had a device that we could borrow for this as he was conducting a number of tests to help the company work towards ISO14001.

He also had a device that allowed us to measure the amount of power drawn by various devices – we seemed to get a couple of slightly odd readings, but when we discounted those, the average values appeared to match what would be expected. We therefore assumed that the errors we had were down to incorrect use.

Having got our base line values, we then started to increase the ambient temperature of the room, and examine what effect this had. Each time, we would leave the changed settings for a couple of weeks to see what would happen; in each case, there was no sign of distress on the servers, so we were able to increase the temperature again.

After some time, we found that the “sweet spot” was between 20 and 24 degrees centigrade. Above 24 degrees, we would see the fans in the servers starting to work much harder and draw more power. Below 20 degrees, the A/C was still running almost all the time. In that range, however, we found that the A/C unit ran at its lowest power draw whilst the servers ran at a comfortable level.

We found that in the racks, we had a few “hot spots”; places where the temperature was quite a bit higher than the ambient temperature of the room. We were told that this is normal and generally considered a good thing; these create a thermal current that allows the cooling to happen naturally. The interesting thing was that although the ambient temperature increased by 12 degrees, the hot spots only increased by 3-4 degrees.

Part of the work meant that we had to make sure that the racks were properly positioned in the room to allow for adequate air flow, and the direction of air from the A/C also had to be optimised to prevent “air curtains” forming at various places. We also had to make sure that things such as blanking plates were used to ensure a properly controlled air current within the racks.

Although this all sounds very grand, the room is quite small and most of the work was done in between our normal activities. We were able to make use of some additional advice from the A/C supplier, but that was relatively minor. The total amount of work required was actually quite small, but the results have been very good. We have seen a reduction in power consumption of just under 50% for the server room as a whole – which translates into significant cost savings.

I’ve added a link to a resource that I would recommend to anyone wanting to do work on their server room facilities. It is primarily aimed at North America, but there are some bits that are specifically for the European market. It will take some time to go through all of it, but I consider that it would be time well spent.

http://www.schneider-electric.com/sites/corporate/en/products-services/training/energy-university/energy-university.page?tsk=77518T&pc=26947T&keycode=77518T&promocode=26947T&promo_key=26947T

The really good thing - we now have a server room that we can work in, in reasonable comfort all the time!

Tuesday, 24 August 2010

I'm back!

It's been a while since I posted anything; 6 months in fact. It's not a case of having nothing to write about, far from it. I've just been very busy, plus I've been a bit more active in other areas.

One thing that I thought would be appropriate to point out is a Microsoft resource at: http://www.microsoft.com/uk/business/peopleready/technology/ioassessment/osyci/survey.mspx
This allows you to take a "survey" that can give you an indication of the status of your IT provision. I first came across this a while back and I found it very useful as part of the planning process. In order for you to reach a particular destination, it helps to know where you are starting from, so you can use the right directions.

Essentially, Microsoft suggest that IT departments can be classified into one of 4 levels based upon standard practice. Five years ago, we would have definitely been classed as being at the lowest level, "reactive". The IT provision was based around fixing problems after they occurred and very little thought went into planning or preparation.

We've slowly moved through the various stages, going from "standardised" to "rationalised", and are now pretty much at the top level, "strategic". There are still a few areas that we could improve upon, but that will always be the case. However, the IT is now a solid platform that people can use. We don't get the network failures, system crashes, or data losses that used to occur. Resources are there and available 24 x 365 for people to use, and generally they can access them using whatever device is appropriate.

Now although this all sounds great, there is unfortunately a fly in the ointment. The biggest problem is still the unit that is positioned between the chair and keyboard! It has been identified that we need to get people better trained, but somehow that never seems to get translated into action. One of the worst instances was a person that had been with the company for some 8 years. Unable to log on, the person phoned the helpdesk to ask what her user name was! (She normally didn't have to type that in, as it just appeared in the login box.)

I would encourage everyone to take a look at the Microsoft Core Infrastructure Optimisation resource. I think that you'll find it of significant value and help.

Friday, 12 February 2010

BCS - Computer Forensics

For some time, I've been working towards a post graduate degree through the Open University. It's hard work, particularly after a long day when all you want to do is switch off and relax. However, I find the courses fascinating and of help to me in my daily work, so I keep on working at it.

The last course I did was particularly interesting - Computer Forensics and Digital Examinations. This is a very technical subject, but it also requires an understanding of legal procedures. It isn't enough to say "I found so and so"; you have to demonstrate that the evidence is relevant, accurate and consistent, and present it in a way that non-technical people can understand. I found it all really interesting, if not totally linked to my daily job.

So when the BCS South West indicated that they were holding an evening event and the topic was Computer Forensics, I jumped at the chance to attend. It was at the University of Plymouth, which is a really nice venue, if a little bit of a trek to get to from where I live. The speaker was a visiting professor, John Haggerty from Manchester, and the presentation was lively and informative. The actual notes should be available at this link: http://www.bcssouthwest.org.uk/server.asp?page=pastevents

For me, the presentation covered most of the items that I had previously studied and it was really good to refresh my memory. It was also interesting to see that even in the short time since I did my course, a number of changes have occurred, and the discussion after the talk highlighted some of the issues facing practitioners in that field.

One thing that is of interest - Digital Forensics is a field that is wide open for people to move into. However, there are a lot of people that think that because they have a small amount of experience in running a computer, they know what to do to examine one. Professor Haggerty referred to this as the "CSI" effect - people see the TV shows where someone drives an expensive car, goes to a pristine work space and in half an hour recovers all the required information to solve the case (and the impossibly attractive woman is suitably impressed by the display of brain power!).

In reality, forensics is a long, tedious job. Everything has to be documented, step by step, and any assumptions made have to be justified. There are a number of practitioners that have had their reputations destroyed by a simple mistake, and once that happens, they are unlikely to be able to work in the field again.

As the technology moves on, the process of the examination gets harder - I can remember when I bought my first hard drive of 20 MB and I wondered how I would ever fill it up. I now regularly work with physical hard drives of 500 GB and logical partitions of over 1 TB. To properly analyse and document such a drive can take a very long time and new tools are being developed to try to make the analysis easier, but it still requires considerable patience.

But all in all, a great evening - a fascinating topic, well presented. And for those IT people that think the BCS is only for academics, I would strongly suggest that you go along to one of the (free) events - I'm sure that you'll change your mind.

Friday, 29 January 2010

Hard driving SQL

We have been working on installing an SAP ERP system for some time now. It went live in the latter part of 2009, and almost immediately we started to get some performance issues. After some discussions, we were advised that we should move a number of components from the SQL server to separate disks.

The server had originally been set up to the specific instructions of the system integrators, and they had carried out the installation of their software. We had 2 logical drives; the operating system on the C: drive and the rest of the product on drive D:.

Essentially, they now advise that we should put the paging file, tempdb files, and transaction log files all on separate logical drives. This does make sense; spreading the different types of I/O across separate drives means less contention, as the same disks are no longer handling everything at the same time. However, the server we have is an HP ProLiant DL380 with space for just 6 drives. As all the slots are full, we can’t physically add any more to the existing device.

However, there is a way around this; HP sell external disk arrays which can be added to an existing server. In our case, we obtained the MSA 20 unit, which holds 12 SATA drives and is connected to an HP 6400 Smart Array controller card. We ordered all of the required equipment back before Christmas, but unfortunately we had a series of problems getting the hardware. The bad weather didn’t help as we are a bit out on a limb, but the various bits were coming from different depots, so weren’t despatched together.

Last week, after all of the equipment had finally turned up, we set out to do a test run of the process of adding the hardware, and this went through fairly well. It took us about 5 hours, as we wanted to double check everything at each stage; we had not had the chance to do something like this before and wanted to be certain it would work. We made notes of the steps and waited for the Sunday so that we could make a start on adding the new hardware to the main system.

The controller card was very easy to add. Pop open the cover, lift out the holder, insert the card and replace the cover. I also connected the cable to the disk array at the same time as I found that easier than trying to fiddle about in the back of the rack trying to make the connection. The slot that the cable uses on the back of the card is quite small and difficult to reach when the server is back in place.

When we fired up the server, it ran through the normal POST routine, and it quickly identified the new Smart Array device. It took a while for the disks to initialise; about 12-15 minutes for them all. However, we then hit our first snag; when it reached the end of the initialisation, it suddenly crashed and re-booted. Funny thing though, when the server restarted, it went back to the initialisation routine and then completed perfectly.

It was then necessary to set up the logical drives and this is really easy to do. Within the configuration utility, just select the physical drives, the type of RAID and away you go. We chose to put 3 drives at a time in a RAID 5 configuration. It should give the space we need and the protection that we wanted, and we get 4 logical drives (12 HDD divided by 3 = 4); each three-disk RAID 5 group gives the usable capacity of two disks, with the equivalent of the third used for parity. With all 4 done, we could then re-boot the server and see the new drives in the disk manager – we set it to create a new partition on each logical drive and format appropriately.

All of this took us about an hour, perhaps just a bit over. We then moved the paging file and set it to a slightly larger size than before – a quick reboot and still everything was going well. We copied the tempdb folder to a new drive and then used a SQL script that we had found for dropping it and re-attaching it at the new location. It took literally only a few seconds to do and we were starting to get really cocky. Then it all went wrong.
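
For anyone tempted to do the same, the usual method (and I assume the script we found did something close to this) is to repoint tempdb with ALTER DATABASE and then restart the SQL Server service; tempdb is rebuilt from scratch at every startup, so strictly speaking you don’t even need to copy the old files over. A minimal sketch, assuming the default logical file names and a made-up E: drive:

    USE master;
    GO
    -- Repoint tempdb's data and log files at the new drive.
    -- 'tempdev' and 'templog' are the default logical names;
    -- the E: paths here are placeholders.
    ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, FILENAME = 'E:\SQLData\tempdb.mdf');
    GO
    ALTER DATABASE tempdb
    MODIFY FILE (NAME = templog, FILENAME = 'E:\SQLData\templog.ldf');
    GO
    -- Restart the SQL Server service; tempdb is recreated at the
    -- new paths on startup.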

We stopped the SQL service to copy the transaction log over – all 38 GB of it! We then started the copy process and it took ages. It seemed to copy about 8 GB and then it would pause for ages (almost 20 minutes), before carrying on. We got to a point where it had reached around 12-14 GB and the damn server blue screened (one of the few occasions that we have seen Windows Server 2003 do a BSOD).

It turned out to be a paging fault error – once the server was back up, we modified the paging file to put it back to the same minimum size that it had been, although we left it on the same max size. I restarted the copy process and we waited.. and waited… and waited…. and waited…..

Eventually, after about another two and a half hours, the copy process finished. We then ran the SQL commands to change the database to point to the new transaction log location and, once done, we verified that this was correct. We then ran up the ERP to make sure that it worked and it was good. By this time, it was well after 1:00 pm – we quickly finished everything off and locked up, then headed off to a local watering hole for Sunday lunch on the company.
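
I won’t reproduce our exact commands, but on SQL Server 2005 and later the pattern looks something like the sketch below; the database name, logical file name and F: path are all made up, so check the real logical name with sp_helpfile before trying anything similar. On older versions you would detach and re-attach the database with sp_detach_db / sp_attach_db instead.

    USE master;
    GO
    -- Take the database offline so the log file can be moved.
    ALTER DATABASE ERPPROD SET OFFLINE;
    GO
    -- (Copy the .ldf file to its new drive at this point.)
    -- Then repoint the database at the moved transaction log.
    ALTER DATABASE ERPPROD
    MODIFY FILE (NAME = ERPPROD_log, FILENAME = 'F:\SQLLogs\ERPPROD_log.ldf');
    GO
    ALTER DATABASE ERPPROD SET ONLINE;
    GO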

And just to finish the story off, the technician’s wife works at that hotel. Whilst we were eating, she sent a note through from the back room, demanding to know where her dinner was. So a small piece was cut off of the dinner and put on a small plate to be sent out to her – 5 minutes later a message came back demanding to know where the ketchup was!

Wednesday, 6 January 2010

New year plans

So the holidays are over and we are all back to work – well almost. Unfortunately, the bad weather has caused some disruption, as a number of staff can’t get into work. Although that hasn’t affected IT staff, we are having to do a few things to help others out. Bet we don’t get any help from them when we need it later in the year!

I like to plan out what work we have to do – preferably at least a few months in advance. As such, I have a list of jobs and priorities against them and this gets updated throughout the year. At the moment, there are a large number of items for the next 3 months and quite a few for the second half of the year.

We are planning to go on a couple of specific training courses, there are some hardware and software upgrades, a couple of events that I feel would be appropriate for myself or my staff to attend and there are a number of jobs that need to be done as part of rolling maintenance programmes. We also have several projects under way and the various steps need to be arranged in the correct sequence and fitted in amongst the other work – plus of course we have the occasional problem that needs to be supported.

Unfortunately, there are several jobs that we cannot yet schedule – we are waiting for information from other people. One of our sites is proving to be a bit too small to handle the work load, so the company are looking at alternative locations. However, the senior managers can’t decide which of the newer sites would be most appropriate, so we can’t yet arrange for any of the required work to be done. Of course we know full well that when they do finally decide, they will expect all of the work to be complete within a few days!

In fact that move is going to be a much bigger task than they anticipate – once the decision is made they will then argue over the layout of the place and almost certainly, will change what they want on a daily basis. We will be cabling up the site for a network ourselves – it saves the company quite a bit of money although it does take up a bit of time. I’ve designed a particular method of network architecture that really works for us, and provides a great deal more flexibility and scalability than the way that these installations normally get done. Most of the people doing cabling appear to be electrical installers; they think CAT 5e can be treated like standard 2 core and earth, and they seem to have a real problem if you ask for work to be done in a particular way.

On top of that, we have to get the telephone lines moved, get an ADSL connection and move all the IT equipment ourselves – the last time we had a move, we also ended up moving all the desks and cabinets as well. The staff seemed to think that they could just close down the PCs, put on their coats, pick up their handbags and walk to the new site to find the desks all set up, the PCs installed and turned on for them! They were rather upset to find that they were expected to do some of the work themselves!

So January is looking to be quite a busy month, what with one thing and another. Happy new year!