This past weekend I dug into an aspect of Windows Server codename “Longhorn” to personally check out something that I’ve been excited about for a while: a “server core” installation.

Doing the Installation

After burning myself a Beta 3 disk, I fired it up, and after a few basic screens (US-English keyboard, etc.) I reached the screen for choosing the installation type. I selected the CORE installation and proceeded, chose “new installation” and a disk partition for the install, and zoom: the installation went by very quickly, rebooted once and was ready to go. I then had to log in as Administrator, set up a password, enable the firewall and do some other basic setup. Then I did a recursive dir starting at the root to see what footprint the server core had in relation to a normal Windows Server.
Look at that, only 1.775GB installed on the entire disk. To contrast that, I installed a default build of the regular Longhorn server on a 14.6GB partition and it only had 3.79GB remaining free space.
Doing the math, I get:

Longhorn Server Core footprint: 1.78 GB
Longhorn Server default footprint: 10.81 GB (the 14.6 GB partition minus 3.79 GB free)

So the Server Core installation is only about 16% of a default Windows Server installation.

Why This is Cool for Security

Can you say “reduced attack surface area”?
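Going back to the numbers above, the footprint ratio can be double-checked with a quick script (figures taken directly from the measurements in this article):

```python
# Footprint comparison from the article's measurements, in GB.
core_gb = 1.78      # Server Core install
default_gb = 10.81  # default Longhorn Server install

ratio = core_gb / default_gb
print(f"Server Core is {ratio:.0%} of a default install")  # roughly 16%
```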
The disk space measurement is really just a proxy for the amount of code installed that the IT manager has to worry about in terms of managing security risk. I’m not claiming this was a Microsoft innovation, but it is chock full of security goodness. Much of what normal users think of as “part of” Windows is not present in a Server Core deployment.
All of these are absent:

- The Windows graphical user interface (a minimal set of graphics capability is present)
- Internet Explorer
- File Explorer
- Media Player
- Internet Information Server
- much, much more

In fact, the documentation describes the roles that are available in Server Core:

- Active Directory Domain Services
- Active Directory Lightweight Directory Services (AD LDS)
- Dynamic Host Configuration Protocol (DHCP) Server
- DNS Server
- File Services
- Print Server
- Streaming Media Services

Additionally, there are some other optional features (e.g.
Subsystem for Unix Applications, Failover Clustering) available. My next step is to go back through Windows Server 2003 vulnerabilities from the past few years and see how many would not have been applicable to a theoretical “Server Core” build of WS2003. This should give me a ballpark for how much Longhorn server security could benefit going forward.
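That triage could look something like the sketch below: keep only the bulletins whose affected component actually ships in a Server Core build. The component names and bulletin IDs here are invented purely for illustration.

```python
# Hypothetical sketch: filter past security bulletins by whether the
# affected component is present in a Server Core install.
CORE_COMPONENTS = {"kernel", "ntfs", "dns", "dhcp", "smb"}

bulletins = [
    {"id": "EX-001", "component": "internet explorer"},  # absent in Core
    {"id": "EX-002", "component": "media player"},       # absent in Core
    {"id": "EX-003", "component": "smb"},                # present in Core
]

applicable = [b["id"] for b in bulletins if b["component"] in CORE_COMPONENTS]
print(applicable)  # only the SMB issue would still apply
```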
File system integrity

Chkdsk is both necessary and evil. It is necessary because NTFS is not immune to file system corruption, and chkdsk is the tool used to fix transient and permanent problems such as bad sectors, lost files, missing headers and corrupt links. It is evil because chkdsk can take a long time to execute, depending on the number of files on the volume.
It requires exclusive access to the disk, which means users could be waiting for hours, or even days, to access their data.

Chkdsk has evolved over the years just as disk drives continue to explode in size. Back in the mid-1990s with NT 3.51, a 1 GB disk was considered a large drive. Now we have terabyte disks, combined with storage controller RAID functionality, that allow us to configure extremely large volumes.
As disks get larger, administrators leverage the capacity for more users per disk, which translates to more user files. Unfortunately, chkdsk does not scale well when analyzing hundreds of millions of files, so administrators are reluctant to use large volumes due to the increased potential downtime. Over the years, improvements have been made to hasten chkdsk's execution time. Switches have been added to chkdsk to skip extensive index and folder structure checking, and Windows can also be configured to skip running chkdsk when a dirty volume is brought online. But these improvements only mask the underlying problem: scanning a large disk with millions of files takes a very long time. The table below shows approximate chkdsk execution times for major versions of Windows.
Operating System    2 Million Files    3 Million Files
NT4 SP6             48 hours           100+ hours
Windows 2000        4 hours            6 hours
Windows 2003        0.4 hour           0.7 hour

Operating System    200 Million Files  300 Million Files
Windows 2008 R2     5 hours            6.25 hours

Chkdsk revamped

In Windows Server 2012 and in Windows 8, enterprise-class customers can finally have confidence when deploying multiterabyte volumes. Chkdsk has been redesigned to run in two separate phases: an online phase for scanning the disk for errors and an offline phase for repairing the volume. This was done because the vast majority of time spent executing chkdsk is spent scanning the volume, while the repair phase takes only a few seconds. Better yet, most of the new chkdsk functionality has been implemented transparently, so you won't even know it's running. The analysis phase of chkdsk now runs as a background task. If NTFS suspects a problem in the file system, it attempts to self-heal it online.
Errors of a transient nature are fixed on the fly with zero downtime. Any real corruption is flagged and logged for corrective action when it is convenient.
In the meantime, the volume remains online to provide immediate access to your data. Once every minute, the health of all physical disks is checked, and any problems are reported to event logs and management consoles, including the Action Center and the Server Manager. The corrective action usually involves remounting the drive, which takes just a few seconds. The amount of downtime for repairing corrupt volumes is now based on the number of errors to be fixed, not the size of the volume or the number of files. Cluster Shared Volumes (CSVs) also benefit from the integrated chkdsk design by transparently fixing errors on the fly.
Whenever any corruption errors are detected, I/O is transparently paused while fixes are made to repair the volume and then automatically resumed. This added resiliency makes CSVs continuously available to users with zero offline time. The command line interface (CLI) chkdsk command is still available for fixing severely corrupt volumes.
In fact, several new options have been added to support the new design, including /scan, /forceofflinefix, /spotfix and /offlinescanandfix. There is also a new cmdlet called repair-volume to offer the same chkdsk functionality with PowerShell. A brief description of the new options is provided below.
Option              Description
Repair-Volume       PowerShell cmdlet that performs repairs on a volume.
OfflineScanAndFix   Takes the volume offline to scan and fix any errors. Equivalent to chkdsk /f.
Scan                Scans the volume without attempting to repair it. All detected corruption is added to the $corrupt system file. Equivalent to chkdsk /scan.
SpotFix             Takes the volume offline briefly and then fixes only the issues that are logged in the $corrupt file. Equivalent to chkdsk /spotfix.
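The option-to-switch correspondence listed above can be captured as a simple lookup table, handy if you are scripting around both tools:

```python
# The repair-volume options and the chkdsk switches they correspond to,
# as given in the table above.
CHKDSK_EQUIVALENT = {
    "OfflineScanAndFix": "chkdsk /f",
    "Scan": "chkdsk /scan",
    "SpotFix": "chkdsk /spotfix",
}

print(CHKDSK_EQUIVALENT["Scan"])  # chkdsk /scan
```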
For example, if you suspect severe corruption on a particular volume, you can manually repair the drive by first scanning it to record any errors in the $corrupt system file. Then, when it is convenient to take the drive offline briefly, use the -SpotFix option to fix the errors:

PS C:\> repair-volume -DriveLetter T -Scan
PS C:\> repair-volume -DriveLetter T -SpotFix

For more information on the repair-volume cmdlet, use the command get-help repair-volume -full.
Windows Server 2012 has many improvements to increase the availability of your data. Now you can have very large disks with hundreds of millions of files and not have to worry about chkdsk slowing your boot time. While most of the new chkdsk functionality is implemented transparently, the CLI chkdsk tool and the new repair-volume PowerShell cmdlet provide administrators with the ability to fix volumes manually. About the author: Bruce Mackenzie-Low, MCSE/MCSA, is a systems software engineer with HP, providing third-level worldwide support for Microsoft Windows-based products, including Clusters and Crash Dump Analysis. With more than 20 years of computing experience at Digital, Compaq and HP, Bruce is a well-known resource for resolving highly complex problems involving clusters, SANs, networking and internals.
Longhorn Pillars: Indigo

Connected Systems – The Power of Indigo

Some of the elements Microsoft is touting to developers of Longhorn are:

- Managed code capabilities, which emphasize safety when writing applications
- Service-Oriented Architecture (SOA): structured applications that are a composite of different services
- Web services, the heart of Indigo

Now we introduce Indigo. Its web services element is built into the platform, allowing applications to use rich Extensible Markup Language (XML)-based messaging as the conduit for communicating with each other, rather than a set of static objects.
Indigo is a set of technologies for developing connected applications on Windows Longhorn. Indigo provides a complete and flexible messaging platform for building connected applications independent of network topology. Indigo represents a new dimension in how we leverage the capabilities of a connected Internet across disparate systems, taking web services to a whole new level.
The “Connected Systems” concept of Indigo makes Web services the foundation for interoperability and integration. Indigo connected systems also demand guarantees for secure and reliable communication – requirements that are often costly and difficult to implement. Indigo will radically simplify how the next generation of connected systems is built. It accomplishes this through three architectural design goals:

- Built-in support for a broad set of Web services protocols
- Implicit use of service-oriented development principles
- A single API for building connected systems

Indigo reduces complexity for developers by extending their existing knowledge of the .NET Framework 2.0, and by enhancing and extending the richness of developer tools such as Visual Studio 2005.
Indigo offers a whole new level of excitement that will ignite a new set of applications built on top of the service-oriented architecture infrastructure, exploiting its capabilities in new ways. For the business and consumer markets, this is definitely a paradigm shift that offers a greater level of sophistication when it comes to connecting these applications with the web and building an infrastructure on top of the Internet.
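The XML messaging that underpins this style of connectivity can be sketched with nothing but a standard library. The example below (not Indigo itself, just an illustration of the wire format it builds on) constructs a minimal SOAP 1.1 envelope; the GetQuote payload and its content are invented for the sketch.

```python
# Build a minimal SOAP 1.1 envelope using Python's standard library.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
ET.register_namespace("soap", SOAP_NS)  # use the conventional prefix

envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
ET.SubElement(body, "GetQuote").text = "MSFT"  # illustrative payload

xml_text = ET.tostring(envelope, encoding="unicode")
print(xml_text)
```

Every platform that speaks SOAP can parse a message like this, which is exactly the interoperability story the article describes.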
Broad Support for Web Services

Today’s Web services technologies provide support for basic interoperability between applications running on different platforms. However, most of these technologies lack the ability to achieve this interoperability with guarantees for end-to-end security and reliable communication. Indigo delivers secure, reliable, transacted interoperability through built-in support for the WS-* specifications.
For developers, this drastically reduces the amount of infrastructure code required to achieve heterogeneous interoperability. For businesses, it means the ability to interact with customers, partners, and suppliers both within and beyond the walls of the organization, regardless of the platform they use.

Service-Oriented Design

For years, developers and organizations have struggled to build software that adapts at the speed of business.
Service-oriented development principles help overcome this challenge with architectural best practices for building highly adaptable software. Indigo is the first programming model built from the ground up to provide implicit service-oriented application development. This enables developers to build services that are autonomous and can be versioned independently of one another, thereby reducing long-term upgrade and maintenance costs.
For businesses, this facilitates an IT infrastructure that is resilient to inevitable change and easier to manage over time.

Unified Programming Model

Traditionally, developers have had to use multiple technologies to build connected systems. This not only required them to learn disparate APIs, but also made it difficult to combine functionality from the different technologies into a single solution. Indigo provides the first unified API for developing all classes of connected systems. It combines and extends the functionality of existing Microsoft technologies (ASMX, .NET Remoting, .NET Enterprise Services, Web Services Enhancements, and System.Messaging) to deliver a single, highly productive development framework that improves developer productivity and reduces organizations’ time to market. In conclusion, we can say that Indigo provides the functionality and flexibility to appeal to organizations of all sizes and developers from diverse backgrounds.
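The independent-versioning idea mentioned earlier is worth a concrete sketch. The example below is an assumption about how version tolerance plays out in any message-based system (it uses plain JSON rather than Indigo's XML contracts, and the field names are invented): a v2 producer adds a new optional field, and a v1 consumer that reads only the fields it knows keeps working without redeployment.

```python
# Sketch of version-tolerant messaging: a v1 consumer survives a v2 message.
import json

# A v2 service adds an optional "priority" field to the message.
v2_message = json.dumps({"order_id": 42, "priority": "high"})

def v1_consumer(raw):
    """A v1 consumer reads only the fields it knows about."""
    msg = json.loads(raw)
    return msg["order_id"]  # unknown fields are simply ignored

print(v1_consumer(v2_message))  # 42
```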
Indigo can be used to build connected systems that run within the context of a single machine, across company intranets, or spanning the global Internet. It addresses a broad spectrum of scenarios, from connected line-of-business and vertical applications to interactive multi-player games. In addition to extending the functionality of the .NET Framework 2.0 and Visual Studio 2005, Indigo can be used with BizTalk Server to provide both brokered and un-brokered application-to-application communication. And with support for Windows XP, Windows Server 2003, and Windows codename Longhorn, Indigo will radically simplify how the next generation of connected systems is built on the Windows platform. This feature information was obtained partially or in full from Microsoft and is provided by ActiveWin.com for your convenience. For the most accurate information please visit the official site.
Microsoft retains all intellectual property rights.