
Target: the new data center

May 24, 2004 | 11 mins
Data Center

Scott Hopkins, vice president of technology planning for Harte-Hanks, shares his road map for an entirely virtual new data center.

Scott Hopkins always has one thing on his mind: how to make IT flexible enough to grow and shrink to match business goals. As vice president of technology planning at Harte-Hanks, a direct marketing services company in San Antonio, Texas (best known for its PennySaver publications), Hopkins ensures that business drives technology initiatives such as virtualization of storage, network and server resources. Hopkins says he’d like to see the lines between those distinct technology worlds blur so he can create one dynamic pool of data center resources. From the company’s Billerica, Mass., data center, Hopkins shared his vision of utility computing with Network World Senior Writer Denise Dubie.

We’ve heard a lot about the intelligent, automated data center of the future. How do you define the new data center at Harte-Hanks?

The new data center, or the utility model that I talk about, is looking at IT from the shared resource perspective and not necessarily how you pay for that resource. It’s a different way of looking at the resources and putting them to use for your business. To have a utility model, you need to have virtualization capabilities, and those virtualization capabilities cannot be segmented by tiered technology. When they can all be brought together, then we can achieve a truly dynamic data center.

In the past, data center managers would be very concerned about what servers they had, what technology they had vs. looking at the data center as a utility model and being able to combine resources to provide a service. We don’t look at just servers. We don’t look at just the network, and we don’t look at just the storage environment. You have to really look at all of that as a whole, and as a utility that has the ability to provide a service to the customer, whether it be internal or external.

Do you use virtualization today?

Right now, because of the technology and because the data center is separated into tiers – storage, server and network – we are only using quasi-virtualization. We use virtualization in the storage environment today. We also use resource management tools on the servers as well as the network through virtual LAN-type technology. And we use quality-of-service (QoS) tools to better use our virtualized network resources.

What are the advantages of using virtualization in these technology silos?

On the server side, we’ve been able to share more resources. Being able to logically provision server resources protects us from a security perspective and from a performance perspective. If we have multiple activities occurring on one server, none of those activities overrides the others from resource use. That’s given us a lot of flexibility.
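The isolation Hopkins describes can be sketched in a few lines: each workload reserves a fixed share of a server's capacity, so no single activity can starve the others. All names and numbers here are illustrative, not Harte-Hanks tooling.

```python
# Minimal sketch of logical server provisioning: each workload reserves a
# fixed slice of a server's capacity, so no single activity can override
# the others. Names and numbers are illustrative only.

class Server:
    def __init__(self, cpu_units):
        self.cpu_units = cpu_units      # total capacity of the box
        self.allocations = {}           # workload name -> reserved units

    def provision(self, workload, units):
        used = sum(self.allocations.values())
        if used + units > self.cpu_units:
            raise ValueError(f"cannot reserve {units} units for {workload}")
        self.allocations[workload] = units

    def headroom(self):
        return self.cpu_units - sum(self.allocations.values())

server = Server(cpu_units=100)
server.provision("billing", 40)
server.provision("reporting", 30)
print(server.headroom())  # 30 units left for new work
```

A request that would exceed the box's capacity is simply refused, which is the "protection" Hopkins gets from logical provisioning.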

From the storage side, we have been able to reclaim a significant portion of our storage environment just by having the capability of provisioning. We went from a direct-attached environment to a storage-area network that allows us to do ‘grade school’ virtualization. We’re not in ‘college’ yet, but we can better use those resources to provision storage based on business needs. If we were in college, we’d be able to allow much more sophisticated and complex virtualization to help us manage storage resources.
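The move from direct-attached disks to a SAN amounts to carving logical volumes out of one shared pool, so that reclaimed space is immediately reusable elsewhere. A minimal sketch of that idea, with hypothetical names and sizes:

```python
# 'Grade school' SAN provisioning: volumes are carved out of one shared
# pool instead of leaving disks stranded on individual servers.
# Names and capacities are illustrative only.

class StoragePool:
    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.volumes = {}               # volume name -> size in GB

    def provision(self, name, size_gb):
        if self.free_gb() < size_gb:
            raise ValueError("pool exhausted")
        self.volumes[name] = size_gb

    def reclaim(self, name):
        return self.volumes.pop(name)   # capacity returns to the pool

    def free_gb(self):
        return self.capacity_gb - sum(self.volumes.values())

pool = StoragePool(capacity_gb=1000)
pool.provision("crm_db", 300)
pool.provision("mail", 200)
pool.reclaim("mail")                    # freed space is instantly reusable
print(pool.free_gb())  # 700
```

In a direct-attached world, the 200GB freed by "mail" would stay stranded on that server; in the pooled model it is available to the next request.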

How does network virtualization work in your data center?

It’s about introducing QoS and [virtual] LAN technology, and it’s not having one network for everybody. We separate the network for security and performance reasons. Using VLANs guarantees performance and security. It’s more complex, but it doesn’t mean it’s harder to manage.
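The combination of VLAN separation and QoS can be modeled simply: traffic is tagged by segment, and higher-priority classes are served first. The segment names, VLAN IDs and priorities below are hypothetical.

```python
# Sketch of VLAN-style separation with QoS: traffic is tagged by network
# segment, and higher-priority classes are dequeued first.
# Segment names, VLAN IDs and priorities are illustrative only.

VLANS = {
    "customer": {"vlan_id": 10, "qos_priority": 1},   # 1 = highest priority
    "internal": {"vlan_id": 20, "qos_priority": 2},
    "backup":   {"vlan_id": 30, "qos_priority": 3},
}

def classify(packets):
    """Tag each packet with its VLAN info and order the queue by priority."""
    tagged = [dict(p, **VLANS[p["segment"]]) for p in packets]
    return sorted(tagged, key=lambda p: p["qos_priority"])

queue = classify([
    {"segment": "backup", "dst": "tape01"},
    {"segment": "customer", "dst": "web01"},
])
print([p["segment"] for p in queue])  # ['customer', 'backup']
```

The separation is what gives the security benefit; the priority ordering is what keeps backup traffic from degrading customer-facing performance.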

Have you been able to consolidate any of your storage, network or server resources?

For the past 12 months or so, we have been consolidating servers. From a management perspective, we put together an asset management plan that looks at the age of our technology and at how the technology is being used. Then [we can see] how we can collapse the number of servers to either newer technology, because technology changes so rapidly, or remove them altogether. In terms of just the chip speed, you can have four or five servers running at different or slower rates – that can cost you a lot. Understanding what we have provides us cost savings in two ways. We save in terms of the management as well as the maintenance of those environments.
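An asset-management pass like the one Hopkins describes boils down to flagging servers that are both old and lightly used as consolidation candidates. The thresholds and server names below are hypothetical:

```python
# Sketch of an asset-management pass: flag servers that are old and
# lightly used as candidates for consolidation or retirement.
# Thresholds, names and numbers are illustrative only.

def consolidation_candidates(servers, min_age_years=4, max_util=0.30):
    """Return names of servers at least min_age_years old and under max_util busy."""
    return [s["name"] for s in servers
            if s["age_years"] >= min_age_years and s["utilization"] <= max_util]

fleet = [
    {"name": "web01", "age_years": 5, "utilization": 0.15},
    {"name": "db01",  "age_years": 2, "utilization": 0.80},
    {"name": "app03", "age_years": 6, "utilization": 0.25},
]
print(consolidation_candidates(fleet))  # ['web01', 'app03']
```

Run periodically, a report like this makes consolidation the "continual process" Hopkins calls for rather than a one-time project.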

But consolidation isn’t a one-time exercise. It’s a continual process because technology changes so rapidly, and you are always going to have aging servers. You need to have a plan in place to recycle technology. We were able to reduce our costs by phasing out older technology and transitioning network load to newer technology. It’s more expensive to maintain older stuff, especially when you have a lot of it. Support for older technology from vendors – and even skills in-house – sometimes seems to be harder to come by. The driver now is the ability to look at an infrastructure environment that gets us to the next step.

What is that next step?

Harte-Hanks’ data center today

Scott Hopkins, vice president of technology planning, oversees Harte-Hanks’ data center operations from his home base in Billerica, Mass. He takes us for a look inside that data center.


I can look at my server, my network and my storage tape environment, but what I don’t have is an overarching tool that ties those separate things together. That’s the next level of the data center: delivering a product that ties network, server and storage together in a framework that allows you to manage effectively and save money.

Where does automation come into play?

I am a great believer in automation. We have done things here that automate our tape management, that use tools to help us monitor and provision our storage environment and that allow us to provision our server environment. We have automated our job processing into a job scheduler. We have an automated help desk capability that alerts staff if certain technology doesn’t meet certain thresholds set for the network, the server or the storage resources. We’ve automated a lot of administration processes, but that’s really just a first step.
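The threshold-driven help desk alerting Hopkins mentions can be sketched as a simple check of current readings against per-metric limits. The metric names and limits here are illustrative assumptions, not the actual Harte-Hanks configuration:

```python
# Sketch of threshold-based alerting: compare current readings for each
# tier against its configured limit and report any that are exceeded.
# Metric names and limits are illustrative only.

THRESHOLDS = {"cpu_pct": 90, "storage_used_pct": 85, "net_latency_ms": 50}

def check(readings):
    """Return the metrics whose readings exceed their thresholds."""
    return [metric for metric, value in readings.items()
            if value > THRESHOLDS[metric]]

alerts = check({"cpu_pct": 95, "storage_used_pct": 60, "net_latency_ms": 70})
print(alerts)  # ['cpu_pct', 'net_latency_ms']
```

In practice the returned list would feed a ticketing or paging system so that staff are alerted, as Hopkins describes, whenever a tier misses its threshold.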

The next step is using that automation to move to a virtualization capability. Again, virtualization today is defined in three tiers: the storage, the network and the servers. The key for the new data center is creation of a virtualization tool that doesn’t do these separately. We have to look at this as a utility model, and not as virtualization of network, servers and storage. We need to first virtualize it as whole. Vendors are not going to make that happen without standards. And when I talk about standards, I mean both vendors agreeing on making their technology accessible and also IT managers agreeing on standard technologies to use in their organization.
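The utility model Hopkins is after, where one request draws from a single pool spanning all three tiers rather than three silos, can be sketched as follows. This is entirely illustrative; as he notes, no such unified tool existed at the time.

```python
# Sketch of the utility model: one service request reserves capacity
# across server, storage and network tiers from a single shared pool,
# all-or-nothing, instead of provisioning each silo separately.
# Resource names and quantities are illustrative only.

pool = {"cpu_units": 100, "storage_gb": 1000, "net_mbps": 500}

def provision_service(request):
    """Reserve resources across all tiers atomically: all succeed or none do."""
    if any(pool[tier] < amount for tier, amount in request.items()):
        raise ValueError("insufficient pooled capacity")
    for tier, amount in request.items():
        pool[tier] -= amount

provision_service({"cpu_units": 20, "storage_gb": 250, "net_mbps": 100})
print(pool)  # {'cpu_units': 80, 'storage_gb': 750, 'net_mbps': 400}
```

The all-or-nothing check is the point: a service either gets everything it needs across the tiers or nothing, which is what distinguishes a utility from three independently managed silos.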

What are your thoughts on vendors’ claims to provide self-managing, self-healing and other intelligent features in data center hardware and software?

Are you asking if I’m a religious person? To a certain degree, I am skeptical about what has been said. It really has to have been well executed and proven for Harte-Hanks to move down that path. Conceptually, it sounds good, but can the vendors execute on those features and can they validate them? I don’t know if they can. Today, I just don’t see the integration or the ability to execute on the overall framework from the vendors. I hope they will eventually get there. It would be great to have an integrated tool that looks into those virtualized tiers in an automated manner and also has the intelligence to tell you whether the resource is there so that the job can be completed. It’s not out there today.

You support automation, but you don’t believe the vendors can provide fully automated data centers?

It’s always a balance between having prudent management and very secure environments, and how you go about using automation. The virtualization technology is where we need to go, but it’s still going to have to be managed by a human. As long as we have that flexibility, then we will have the capability to execute on it. There are so many things that come into consideration to do that execution that can’t be automated today. That intelligence and that knowledge can’t be pulled into tools today.

How do you see applications fitting into the new data center?

To a certain degree one has to follow the other. You need to have the technology in the data center that forces the application providers to change the way they do their processes. You need to have applications that are more parallelized in processing. By that I mean applications that are not single-threaded – applications that can adopt a one-to-many model. You need an application environment that supports applications sending their commands and requests out to multiple resources. The database needs to support that, the servers, the storage and the network, all before the applications. The application host needs to be able to support how you designed the infrastructure. You need to have the infrastructure that provides that capability and then applications change to meet that.
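The one-to-many model Hopkins describes is the fan-out pattern: instead of a single thread working through records serially, requests go out to multiple workers at once. A minimal sketch using Python's standard thread pool, with the workers standing in for multiple servers:

```python
# Sketch of the 'one-to-many' application model: work fans out to
# multiple workers (standing in for multiple servers) instead of being
# processed by a single thread. The workload itself is illustrative.

from concurrent.futures import ThreadPoolExecutor

def process(record):
    return record * 2          # stand-in for real per-record work

records = list(range(8))
with ThreadPoolExecutor(max_workers=4) as executor:
    results = list(executor.map(process, records))

print(results)  # [0, 2, 4, 6, 8, 10, 12, 14]
```

The application's job is only to express the work as independent units; the infrastructure underneath decides how many resources to spread them across, which is exactly the division of labor Hopkins wants.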

The virtualization is in the environment that touches our customers. There are certain things on the back-end that aren’t as high a priority and don’t need to have this reliable, scalable, affordable infrastructure in place. But that’s just our IT shop. Another shop that is a pure IT shop, and not a service bureau or hosting business, probably has a different vision. Our IT demands change when we get new customers or old customers leave us. What we want for the future is for that to be easier, more manageable and less costly.

What challenges would you advise peers to tackle first on their path to the new data center?

One of the first things to do is take inventory of the skills of your people. To be effective in this environment, the roles of the systems admin, the storage admin and the [database administrator] may change, and to a certain extent their skills may need to change. The human element of moving in this direction is critical.

Harte-Hanks vital stats

Primary business: Direct and targeted marketing services and shoppers operations.

Founded: 1920s

Corporate headquarters: San Antonio, Texas

Locations: About 40 worldwide

Revenue: $944.6 million for fiscal year that ended Dec. 31, 2003.

Net income: $87.4 million for fiscal year that ended Dec. 31, 2003.

Employees: 6,371

Recent acquisitions: Avellino Technologies (provider of data profiling technology), in February 2004.

Fun fact: Harte-Hanks is the successor to a newspaper business begun in the early 1920s by Houston Harte and Bernard Hanks.

What technology hurdles should be addressed first?

You need to invest in certain standards, and those standards are not by manufacturer but by technology. Once you have those standards, then you can look at tool selection. Once you understand the human capital and what the standards are, then you can approach the vendor community to understand the tool set that you will need in order to go down this path and see how well vendors match up with your needs.

What do IT managers need to ask their vendors?

You want to understand not what the vendors are necessarily doing today, but where the vendors will be in the future, what their product plans are and what their strategic directions are for technology. Then you need to weigh the validity of their ability to attain that and execute on that against your plans.

How much should business drivers contribute to new data center technology decisions?

They are one and the same. Being able to tie technology to where the business is going and being able to execute on that is what management needs to be focused on. The technology involves another set of decisions. You need to have the people that can help you get down that path. You need to make sure you communicate with the people so they can either supplement their skills, or you can assist them in that. You need to tie it to the long-term goals, you need to develop standards, and you need to do the vendor selection.