Ask the Core Team

FREE online event: Virtualizing Your Data Center with Hyper-V and System Center


Virtualizing Your Data Center with Hyper-V and System Center
Free online event with live Q&A: http://aka.ms/virtDC
Wednesday, February 19th from 9am – 5pm PST

What: Fast-paced live virtual session 
Cost: Free
Audience: IT Pro
Prerequisites: For IT pros new to virtualization or with some experience and looking for best practices.

If you're new to virtualization, or if you have some experience and want to see the latest R2 features of Windows Server 2012 Hyper-V or Virtual Machine Manager, join us for a day of free online training with live Q&A to get all your questions answered. Learn how to build your infrastructure from the ground up on the Microsoft stack, using System Center to provide powerful management capabilities. Microsoft virtualization experts Symon Perriman and Matt McSpirit (who are also VMware Certified Professionals) demonstrate how you can help your business consolidate workloads and improve server utilization, while reducing costs. Learn the differences between the platforms, and explore how System Center can be used to manage a multi-hypervisor environment, looking at VMware vSphere 5.5 management, monitoring, automation, and migration. Even if you cannot attend the live event, register today anyway and you will get an email once we release the videos for on-demand replay! 

Topics include:

  • Introduction to Microsoft Virtualization
  • Host Configuration
  • Virtual Machine Clustering and Resiliency
  • Virtual Machine Configuration
  • Virtual Machine Mobility
  • Virtual Machine Replication and Protection
  • Network Virtualization
  • Virtual Machine and Service Templates
  • Private Clouds and User Roles
  • System Center 2012 R2 Data Center
  • Virtualization with the Hybrid Cloud
  • VMware Management, Integration, and Migration

Register here: http://aka.ms/virtDC

Also check out www.MicrosoftVirtualAcademy.com for other free training and live events.

John Marlin
Senior Support Escalation Engineer
Microsoft Global Business Support


RAP as a Service (RaaS) from Microsoft Services Premier Support


In this post, I’m excited to discuss a new Premier Support offering called Risk Assessment Program (RAP) as a Service (or RaaS for short).

For those that are not familiar with RAP, it is a Microsoft Services Premier Support offering that helps prevent serious issues from occurring by analyzing the health and risks present in your current environment.

For example: if you haven’t done a WDRAP (Windows Desktop RAP) and your end-users are suffering slow boot times, slow logon times, slow file copy, hung applications, and applications crashing, it could help! A WDRAP assesses your current environment and recommends changes which improve the Windows user experience.

Our new RAP as a Service offering helps accelerate the process of diagnosis and reporting, using our RaaS online service.

Q: So what is Microsoft RAP as a Service (RaaS)?
A:  RaaS is an evolution of the Risk Assessment Program offering.

  • RaaS is a way of staying healthy, proactively.
  • It’s secure and private.
  • The data is collected remotely.
  • We analyze against best practices established by knowledge obtained from Microsoft IT, and over 20,000 customer assessments.
  • It enables you to view your results immediately.

You can also take a look at this video describing RAP as a Service:

[Video: Microsoft RAP as a Service]


Q:  What are the benefits of RaaS over a RAP?
A:  The benefits are:

  • Online delivery with a Microsoft accredited engineer.
  • A modern best-practices toolset that allows you to assess your environment at any time and includes ongoing updates for a full year.
  • You get immediate on-line feedback on your environment.  Just run the straightforward toolset and you’ll garner instant insight into your environment.
  • Easily share results with your IT staff and others in your organization.
  • You can reassess your environment to track remediation and improvement progress.
  • Reduced resource overhead requirements.  There’s no need to take your people away from their other work for multiple days, nor do they need to travel to the location where the work is being performed.
  • Better scheduling flexibility.  Due to the agile structure of the RaaS service offering, turnaround times to get a Microsoft accredited engineer to review your environment are much shorter.
  • Better security.  While both offerings are highly secure, RaaS has the added benefit of including no intermediary steps in the assessment process.
  • RaaS includes remediation planning, which helps you understand what’s required to get your environment optimally healthy.
  • A broader toolset that is continually enhanced.   For example, RaaS for Active Directory includes assessment checks that were previously available as two separate service offerings: an Active Directory RAP and Active Directory Upgrade Assessment Health Check.  These are combined in the Active Directory RaaS. RaaS also includes additional new tests such as support for Windows Server 2012.

Q:  What technologies can be assessed using RaaS?

A:  … and others coming soon, such as Hyper-V and more.  Please contact your Microsoft Premier Support Technical Account Manager for further information on availability.

Q:  I can’t wait until the release of the other technologies!
A:  In the meantime, you can still request a RAP for those technologies until these are released with RaaS.

Q:  Is RaaS currently available for non-Premier Support customers?
A:  Not at this time. To find out more about Premier Support, please visit Microsoft Services Premier Support  

Q:  Do I use the RaaS service for my environment before or after going into production?
A:  Both. We highly recommend you test your environment with RaaS before going live.  We also recommend using RaaS after you go into production, because changes between test and production are inevitable.

Q:  What are the system requirements for a RaaS?
A:  The Microsoft Download Center has a detailed description of RAP as a Service (RaaS) Prerequisites.

Q:  How do I schedule a RaaS?
A:  Talk to your Microsoft Premier Support Technical Account Manager (TAM) or Application Developer Manager (ADM), and they can schedule the RaaS.

Q:  Where would I go to sign-in for the RaaS?
A:  You browse to the Microsoft Premier Proactive Assessment Services site and enter your credentials. The packages will be waiting for you to download and start running.

Q:  I’m in a secure environment; we cannot access external websites.
A:  It’s alright! We have a portable version for your needs.

Q:  Does it take a lot of ramp-up time to get familiar with the toolset?
A:  No, the package is wizard-driven for ease of use.

Q:  Do I need to have any down time?
A:  No, the data collection is non-invasive, so no scheduled downtime is required. Collect the data on your own schedule.

Q:  OK, I collected the data, now what are my next steps?
A:  Once data collection is complete, you can submit the data privately and securely to the Microsoft Premier Proactive Assessment Services site for analysis.

Q:  When do I get to see my results?
A:  We (the accredited Microsoft engineers) will analyze and annotate the report for your specific environment.  Once you receive the report back, we will set up a conference call to go over the findings with your staff.

Q:  How long is the report available for us?
A:  The report is available online for twelve months so you can continue remediating any issues/problems.

Q:  Can I re-run the RaaS toolset?
A:  Yes, you get to re-collect the data, submit the data again and get the detailed analysis back for a whole year, as a Premier customer.

Q:  Can I still have Microsoft Premier Field Engineers come on-site?
A:  Yes, we still have that option available to assist you! Regular RAPs are still available.

Thank you and I hope you found this useful and something you can take advantage of.

John Marlin
Senior Support Escalation Engineer
Microsoft Global Business Support

What’s New in Windows Servicing: Part 1


My name is Aditya and I am a Senior Support Escalation Engineer for Microsoft on the Windows Core Team. I am writing today to shed some light on the new changes that have been made to the Windows Servicing Stack in Windows 8.1 and Windows Server 2012 R2. This is a four-part series, and this is the first post:

Windows 8.1 brings in a lot of new features to improve stability, reduce space usage and keep your machine up to date. This blog series will cover each of these new features in detail, along with some of the troubleshooting steps to follow when you run into a servicing issue.

What is Servicing and The Servicing Stack: From Windows Vista onward, Windows uses a mechanism called Servicing to manage operating system components, rather than the INF-based installation methods used by previous Windows versions. With Windows Vista and Windows Server 2008, component-based builds use images to deploy component stores to the target machine rather than individual files. This design allows installation of additional features and fixes without prompting for media, enables building different operating system versions quickly and easily, and streamlines all operating system servicing.

Within the servicing model, the update process for Vista+ operating systems represents a significant advance over the update.exe model used in previous operating systems. Although update.exe had many positive features, it also had numerous issues, the foremost of which was the requirement to ship the update.exe engine with each package.

Servicing is simplified by including the update engine, in the form of the servicing stack, as part of the operating system. The servicing stack files are located in the C:\Windows\WinSxS folder.


This folder can grow very large on Windows Server 2008 and Windows Server 2008 R2 systems; more information on why this happens can be found at:

What is the WINSXS directory in Windows 2008 and Windows Vista and why is it so large?
http://blogs.technet.com/b/askcore/archive/2008/09/17/what-is-the-winsxs-directory-in-windows-2008-and-windows-vista-and-why-is-it-so-large.aspx

What’s new in Windows 8.1 and Server 2012 R2:

1. Component Store Analysis Tool:

A new feature has been added to the DISM command that will allow users to get detailed information on the contents of the Component Store (WinSxS folder).

There have been many users, mainly power users and IT admins, who have raised concerns about the size of the WinSxS store and why it occupies so much space on the system. These users also complain that WinSxS grows over time and are curious to know how its size can be reduced; many have asked what happens if the WinSxS store is deleted completely. There have been multiple attempts in the past to explain what the WinSxS store contains and what its actual size is. For this OS release, a reporting tool has been created that a power user can run to find out the actual size of the WinSxS store as well as get more information about its contents. This is in addition to the article we will be publishing to help users understand how WinSxS is structured, and what its actual size is as compared to its perceived size.

The purpose of this feature is two-fold. First, is to educate power users and IT Admins of Windows about what WinSxS is, what it contains and its importance to the overall functioning of the OS. Second, this feature will deliver a tool via the DISM functionality to analyze and report a specific set of information about the WinSxS store for power users.

From various forums and blog posts, there seem to be two main questions that users have:

· Why is WinSxS so massive?

· Is it possible to delete WinSxS in part or completely?

In addition to this, OEMs do have questions about how they can clean up unwanted package store, servicing logs, etc. from the image.

Based on these questions, we felt that the most important metric for our tool would be the actual size of WinSxS. Secondly, it would be good to report packages that are reclaimable so that a user can run StartComponentCleanup to scavenge them. Lastly, for devices like the Microsoft Surface, which remain on connected standby, it is possible that the system has never scavenged the image. In that case, considering that these tablets have small disks, it becomes important to let users know when the image was last scavenged and whether scavenging is recommended for their device.

We expect the amount of time for completion of the analysis to be somewhere between 40 and 90 seconds on a live system. In this scenario, there needs to be some indication of progress made visible to the user. We will use the existing progress UI of DISM to indicate the % of analysis completed to the user. The user will also get the option to cancel out of the operation through the progress UI.

The following steps describe the end to end flow of using the component store analysis tool:

· The user launches an elevated command prompt by typing Command Prompt on the Start screen.

· The user types in the DISM command:

Dism.exe /Online /Cleanup-image /AnalyzeComponentStore

· The process of analyzing the WinSxS folder can take anywhere between 40s and 90s on a live system, so it becomes important to show progress to the user. The user sees progress in terms of the % of analysis completed, in the standard progress UI supported by DISM.

At the end of the scan, the user gets a report of the results like this:

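The output resembles the following (a representative sample; the values are illustrative and will differ on your system):

Component Store (WinSxS) information:

Windows Explorer Reported Size of Component Store : 4.98 GB

Actual Size of Component Store : 4.88 GB

    Shared with Windows : 4.38 GB
    Backups and Disabled Features : 506.90 MB
    Cache and Temporary Data : 279.52 KB

Date of Last Cleanup : 2014-02-04 12:33:33

Number of Reclaimable Packages : 2
Component Store Cleanup Recommended : Yes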

2. Component Store Cleanup:

The Component Store Cleanup functionality is one of several features aimed at reducing the overall footprint and footprint growth of the servicing stack. Reducing the footprint of Windows is important for many reasons, including providing end users more available disk capacity for their own files, and improving performance for deployment scenarios.

Component Store Cleanup in Windows 8 was integrated into the Disk Cleanup Wizard. It performs a number of tasks, including removing update packages that contain only superseded components, and compressing un-projected files (such as optional components, servicing stack components, etc.). For Windows 8.1, we will add the capability to perform deep clean operations without requiring a reboot.

Today, Component Store Cleanup must be triggered manually by an end-user, either by running DISM, or by using the Disk Cleanup Wizard. In order to make Component Store Cleanup more useful for the average end-user, it will be added into a maintenance task, automatically saving disk space for end-users. To enable this, a change will be made to allow uninstallation of superseded inbox drivers without requiring a reboot (today, all driver installs/uninstalls done by CBS require a reboot).

The superseded package removal feature of deep clean attempts to maintain footprint parity between a computer that has been serviced regularly over time and a computer that has been clean installed and then updated.

2.1. How can Component Store Cleanup be initiated?

Component Store Cleanup supports being initiated in the following three ways:

1. Dism.exe /online /Cleanup-Image /StartComponentCleanup


2. Disk Cleanup wizard:

a. To open Disk Cleanup from the desktop, swipe in from the right edge of the screen, tap Settings (or, if you're using a mouse, point to the lower-right corner of the screen, move the mouse pointer up, and then click Settings), tap or click Control Panel, type Admin in the Search box, tap or click Administrative Tools, and then double-tap or double-click Disk Cleanup.

b. In the Drives list, choose the drive you want to clean, and then tap or click OK.

c. In the Disk Cleanup dialog box, select the checkboxes for the file types that you want to delete, tap or click OK, and then tap or click Delete files.

d. To delete system files:

i. In the Drives list, tap or click the drive that you want to clean up, and then tap or click OK.

ii. In the Disk Cleanup dialog box, tap or click Clean up system files. You might be asked for an admin password or to confirm your choice.

iii. In the Drives list, choose the drive you want to clean, and then tap or click OK.

iv. In the Disk Cleanup dialog box, select the checkboxes for the file types you want to delete, tap or click OK, and then tap or click Delete files.

3. Automatically, from a scheduled task:

i. If Task Scheduler is not open, start Task Scheduler. For more information, see Start Task Scheduler.

ii. Expand the console tree and navigate to Task Scheduler Library\Microsoft\Windows\Servicing\StartComponentCleanup.

iii. Under Selected Item, click Run.

The StartComponentCleanup task can also be started from the command line:

schtasks.exe /Run /TN "\Microsoft\Windows\Servicing\StartComponentCleanup"
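If you prefer PowerShell, the ScheduledTasks module (Windows 8/Windows Server 2012 and later) can start the same task; a minimal equivalent:

Start-ScheduledTask -TaskPath "\Microsoft\Windows\Servicing\" -TaskName "StartComponentCleanup"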

For all three methods, an automatic scavenge will be performed after the disk cleanup in order to immediately reduce the disk footprint. When scavenge is performed for option 1, NTFS compression will not be used since it has a negative impact on capture and apply times, but Delta Compression will be used since it will help with both capture and apply. When run automatically for option 3, deep clean and the scavenge operation will be interruptible in order to maintain system responsiveness.

2.2. What does Component Store Cleanup do?

During automatic Component Store Cleanup, packages will be removed if the following criteria apply:

§ All components in package are in superseded state

§ Packages are not of an excluded class (permanent, LP, SP, foundation)

§ Package is older than defined age threshold

· Only packages that have been superseded for a specified number of days (default of 30 days) will be removed by the automated deep clean task. In order to maintain user responsiveness, automatic Component Store Cleanup will perform package uninstall operations one at a time, checking whether a stop has been requested between each package.

· The Component Store Cleanup maintenance task will be incorporated into the component platform scavenging maintenance task. This task runs once a week, with a deadline of two weeks. This ensures that scavenging and deep clean processing happen relatively quickly after patches are released on Patch Tuesday.


Aging and removal of superseded packages

To determine when to remove superseded packages, deep clean tracks the “age” of superseded packages and removes those that have aged beyond the specified number of days.

The registry value DeepCleanAgeLimit, under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Servicing, specifies the number of days that a package should be “aged” (the default is 30) before being removed. As deep clean discovers each superseded package, it marks the package with an 8-byte timestamp in a registry value named “SupersededTime” under the package key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Component Based Servicing\Packages\<package>

The algorithm looks like this:

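A rough PowerShell sketch of that aging logic, reconstructed from the description above (the real work happens inside CBS; everything other than the two registry values named above is illustrative):

$policy  = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Servicing'
$pkgRoot = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Component Based Servicing\Packages'

# Read the age limit; fall back to the 30-day default when the value is absent.
$limit = (Get-ItemProperty $policy -Name DeepCleanAgeLimit -ErrorAction SilentlyContinue).DeepCleanAgeLimit
if (-not $limit) { $limit = 30 }

foreach ($pkg in Get-ChildItem $pkgRoot) {
    # Assume $pkg is a fully superseded, non-excluded package (see the criteria above).
    $raw = (Get-ItemProperty $pkg.PSPath -Name SupersededTime -ErrorAction SilentlyContinue).SupersededTime
    if (-not $raw) {
        # First discovery: deep clean stamps the package with the current time.
    }
    elseif ([datetime]::FromFileTime([BitConverter]::ToInt64($raw, 0)).AddDays($limit) -lt (Get-Date)) {
        # Aged past the limit: the maintenance task would uninstall this package.
    }
}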

Manual Component Store Cleanup

During manual Component Store Cleanup, packages will be removed if the following criteria apply:

· All components in package are in superseded state

· Packages are not of an excluded class (permanent, LP, SP, foundation)

clip_image014

The functionality for manual Component Store Cleanup largely already exists in Windows 8. To improve performance, manual deep clean performs all package uninstall operations in a single KTM transaction and is not interruptible. Superseded packages are not subject to an age limit; instead, they are removed immediately.

In the next blog in the series, we will discuss Delta Compression and Single Instancing…

Aditya
Senior Support Escalation Engineer
Microsoft Platforms Support

What's New in Defrag for Windows Server 2012/2012R2


Hello everyone, I am Palash Acharyya, Support Escalation Engineer with the Microsoft Platforms Core team. In the past decade, we have come a long way from Windows Server 2003 all the way to Windows Server 2012 R2. There has been a sea change in the Operating System as a whole, and we have added or modified a lot of features. One of these is disk defragmentation, and I am going to talk about it today.

Do I need Defrag?

To put this short and simple, defragmentation is a housekeeping job done at the file system level to curtail the constant growth of file system fragmentation. We have come a long way from the Windows XP/2003 days, when there was a GUI for defragmentation that showed the fragmentation on a volume. Disk fragmentation is a slow, ongoing phenomenon which occurs when a file is broken up into pieces to fit on a volume. Since files are constantly being written, deleted, resized, and moved from one location to another, fragmentation is a natural occurrence. When a file is spread out over several locations, it takes longer for a disk to complete a read or a write I/O. So, from a disk I/O standpoint, is defrag necessary for better throughput? Consider backups: when Windows Server Backup (or a third-party backup solution which uses VSS) is used, it needs a Minimum Differential Area, or MinDiffArea, to prepare a snapshot. You can query this area using the vssadmin list shadowstorage command (for details, read here). The catch is, there needs to be a chunk of contiguous free space without file fragmentation. The minimum requirement for the MinDiffArea is mentioned in the article quoted above.
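For example, to inspect the shadow copy storage association on a volume (run from an elevated prompt; the output below is trimmed to the relevant fields):

C:\> vssadmin list shadowstorage

   Used Shadow Copy Storage space: ...
   Allocated Shadow Copy Storage space: ...
   Maximum Shadow Copy Storage space: ...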

Q. So, do I need to run defrag on my machine?

A. You can use the Sysinternals tool Contig.exe to check the fragmentation level before deciding to defrag. The tool is available here. Below is an example of the output we can get:

[Tool output: 13,486 total fragments; largest free space block approximately 54 GB]

There are 13,486 fragments in all, so should I be bothered about it? Well, the answer is NO. Why?

Here you can clearly observe that I have 96 GB of free space on the C: volume, of which the largest free space block (largest contiguous free space block) is approximately 54 GB. So, my data is not scattered across the entire disk. In other words, my disk is not getting hammered during read/write I/O operations, and running defrag here would be useless.
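If you don't have Contig handy, the in-box defrag analysis produces a similar volume-level report (flags as documented by defrag.exe /? on Windows Server 2012; /A analyzes only, /V prints verbose statistics, including the largest free space extent):

C:\> defrag C: /A /V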

Q. Again, coming back to the previous question, is defrag at all necessary?

A. Well, it depends. We can only justify defrag if fragmentation is causing serious performance issues; otherwise it is not worth the cost. We need to understand that file fragmentation is not always, or solely, responsible for poor performance. For example, there could be many files on a volume that are fragmented but rarely accessed. The only way to tell if you need defrag is to measure your workload and see whether fragmentation is making performance slower and slower over time. If you determine that fragmentation is a problem, then you need to weigh how effective running defrag for an extended period will be against its overall cost. The word cost here means the amount of effort the operating system spends running the task: any improvement you see comes at the price of defrag running for a period of time and possibly interrupting production workloads. For the situation where you need to run defrag to unblock backups, our prescriptive guidance is to run defrag if you encounter a backup error due to unavailability of contiguous free space. I wouldn't recommend running defrag on a schedule unless the backups are critical and consistently failing for that reason.

A look at Windows Server 2008 R2:

In Windows Server 2008/2008 R2, defragmentation ran as a weekly scheduled task. This is how it looked:

[Screenshot: the weekly defrag scheduled task]

The default options:

[Screenshot: the default defrag options]

What changed in Server 2012:

There have been some major enhancements and modifications in the functionality of defrag in Windows server 2012. The additional parameters which have been added are:

/D     Perform traditional defrag (this is the default).

/K     Perform slab consolidation on the specified volumes.

/L     Perform retrim on the specified volumes.

/O     Perform the proper optimization for each media type.

The default scheduled task in Windows Server 2008 R2 was defrag.exe -c, which defragments all volumes. This was volume-centric, meaning the physical aspects of the storage (whether it's a SCSI disk, a RAID set, a thin-provisioned LUN, etc.) were not taken into consideration. This has changed significantly in Windows Server 2012. Here the default scheduled task is defrag.exe -c -h -k, which performs slab consolidation on all volumes at normal priority (-h; the default is low). To understand slab consolidation, you need to understand the Storage Optimization enhancements in Windows Server 2012, which are explained in this blog.
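You can inspect the in-box task yourself; assuming the usual task path (\Microsoft\Windows\Defrag\ScheduledDefrag, where recent Windows versions keep it):

C:\> schtasks /Query /TN "\Microsoft\Windows\Defrag\ScheduledDefrag" /V /FO LIST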

So what does Storage Optimizer do?

The Storage Optimizer in Windows 8/Server 2012 also takes care of maintenance activities like compacting data and compacting file system allocations to enable capacity reclamation on thinly provisioned disks. This is platform specific: if your storage platform supports it, Storage Optimizer will consolidate lightly used ‘slabs’ of storage and release the freed slabs back to your storage pool for use by other Spaces or LUNs. This activity happens on a periodic basis, without any user intervention, and the scheduled task completes provided it is not interrupted by the user. I am not getting into storage spaces and storage pools, as that would further lengthen this topic; you can refer to the Storage Spaces overview on TechNet for details.

This is how Storage Optimizer looks:

[Screenshot: the Optimize Drives UI]

This is how it looks after I click Analyze:

[Screenshot: Optimize Drives after analysis]

For thin-provisioned storage, this is how it looks:

[Screenshot: Optimize Drives showing a thin provisioned volume]

The fragmentation percentage shown above is file-level fragmentation, NOT to be confused with storage optimization. In other words, if I click the Optimize option, it will do storage optimization depending on the media type. In the screenshots above, you might observe fragmentation on volumes E: and F: (I manually created file system fragmentation there). If I manually run defrag.exe -d (traditional defrag) in addition to the default -o (perform optimization), they won't contradict each other, as storage optimization and slab consolidation don't work at the file system level the way traditional defrag does. These options actually show their potential in hybrid storage environments consisting of Storage Spaces, pools, tiered storage, etc. Hence, in brief, the default scheduled task for running defrag in Server 2012 and Server 2012 R2 does not do a traditional defrag job (defragmentation at the file system level) the way Windows Server 2008/2008 R2 did. To do traditional defragmentation of these volumes, run defrag.exe -d, and before you do that, determine whether it is required at all.

Q. So why did we stop the default file system defragmentation or defrag.exe -d?

A. Simple: it didn't justify the cost and effort to run a traditional file system defragmentation as a weekly scheduled task. When we talk about storage solutions holding terabytes of data, a traditional defrag (file system defragmentation) takes a long time and also affects the server's overall performance.

What changed in Server 2012 R2:

The only addition in Windows Server 2012 R2 is the following switch:

/G     Optimize the storage tiers on the specified volumes.

Storage Tiers, a new feature in Windows Server 2012 R2, allow SSD and hard drive storage to be used within the same storage pool. This new switch allows optimization of a tiered layout. To read more about Tiered Storage and how it is implemented, please refer to these articles:

Storage Spaces: How to configure Storage Tiers with Windows Server 2012 R2
http://blogs.technet.com/b/askpfeplat/archive/2013/10/21/storage-spaces-how-to-configure-storage-tiers-with-windows-server-2012-r2.aspx

What's New in Storage Spaces in Windows Server 2012 R2
http://technet.microsoft.com/en-us/library/dn387076.aspx

Summary:

In brief, we need to keep these things in mind:

1. The default scheduled task for defrag is as follows:

· Windows Server 2008 R2: defrag.exe -c

· Windows Server 2012: defrag.exe -c -h -k

· Windows Server 2012 R2: defrag.exe -c -h -k -g

On a client machine it will be defrag.exe -c -h -o; however, if thin-provisioned media is present, defrag will do slab consolidation as well.

2. The command lines -c -h -k (for 2012) and -c -h -k -g (for 2012 R2) for the defrag task will perform storage optimization and slab consolidation on thin-provisioned media as well. Different virtualization platforms may report things differently: Hyper-V shows the media type as Thin Provisioned, but VMware shows it as a Hard disk drive. The fragmentation percentage shown in the defrag UI has nothing to do with slab consolidation; it refers to the file fragmentation of the volume. If you want to address file fragmentation, you must run defrag with -d (as mentioned before).

3. If you are planning to deploy a PowerShell script to achieve the same, the command is simple.

PS C:\> Optimize-Volume -DriveLetter <drive letter name> -Defrag -Verbose

Details of all PowerShell cmdlets can be found here.
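A few concrete examples (a sketch; parameter names are from the Storage module in Windows Server 2012/2012 R2, and -TierOptimize requires 2012 R2):

PS C:\> Optimize-Volume -DriveLetter C -Defrag -Verbose            # traditional defrag (/D)
PS C:\> Optimize-Volume -DriveLetter C -SlabConsolidate -Verbose   # slab consolidation (/K)
PS C:\> Optimize-Volume -DriveLetter C -ReTrim -Verbose            # retrim (/L)
PS C:\> Optimize-Volume -DriveLetter C -TierOptimize -Verbose      # storage tier optimization (/G)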

That’s all for today, till next time.

Palash Acharyya
Support Escalation Engineer
Microsoft Platforms Support

LIVE: Microsoft Virtual Federal Forum


For the first time, the 2014 Virtual Federal Forum will be streamed LIVE from the Reagan Center in Washington, DC! This online digital experience will be completely hybrid, focused on Real Impact for a Lean and Modern Federal Government, and will showcase innovative and cost-effective solutions to unleash greater capabilities within agencies while helping simplify and modernize processes. The Virtual Federal Forum is designed exclusively for the Federal government community, providing the opportunity to hear from Microsoft executives, thought leaders, and strategic partners. Virtual attendees get bonus material not available to the in-person audience and can download related session materials, take live polls and surveys, share ideas, and ask questions of experts and executives through Chat, Twitter, and Q&A sessions.

Date: Tuesday, March 4th
Time: 8am EST – 2:30pm EST

Agenda Highlights:

· Keynote speaker The Honorable Tom Ridge, former Secretary of the U.S. Department of Homeland Security, will speak on The Global Mission to Secure Cyberspace and will be available to virtual attendees for live Q&A.

· Hear from top government agencies in a special customer panel: Veterans Affairs, the U.S. Navy, and the Environmental Protection Agency discuss real-world lessons learned and technology innovations.

· Learn how to leverage a next generation mobile workforce for a 21st Century government with live demos and best practices from Jane Boulware, VP US Windows.

· The Senior Director of Microsoft’s Institute for Advanced Technology in Governments talks about “Rethinking cyber defense… lessons learned from Microsoft’s own experience.”

Other featured speakers include:

Greg Myers - Vice President: Federal
Walter Puschner - Vice President: User Experience IT
Vaughn Noga - Acting Principal Deputy Assistant Administrator for Environmental Information
Captain Scott Langley - USN, MCSE CEH CISSP, Commander Navy Reserve Forces Command N6/CTO
Maureen Ellenberger - Veterans Relationship Management, Program Manager, Veteran Affairs
Dave Aucsmith - Microsoft’s Institute for Advanced Technology in Government

Register for this event using this unique URL.

Thank you in advance and we look forward to your participation at the Virtual Federal Forum!

Configuring Windows Failover Cluster Networks


In this blog, I will discuss the overall general practices to be considered when configuring networks in Failover Clusters.

Avoid single points of failure:

Identifying single points of failure and configuring redundancy at every point in the network is very critical to maintain high availability. Redundancy can be maintained by using multiple independent networks or by using NIC Teaming. Several ways of achieving this would be:

· Use multiple physical network adapters. Multiple ports of the same multiport adapter, or a shared backplane, introduce a single point of failure.

· Connect network adapters to different, independent switches. Multiple VLANs patched into a single switch introduce a single point of failure.

· Use NIC teaming for non-redundant networks, such as client connectivity, intra-cluster communication, CSV, and Live Migration. If the active network card fails, communication moves over to the other card in the team.

· Using different types of network adapters avoids losing connectivity across all network adapters at the same time if there is an issue with a NIC driver.

· Ensure upstream network resiliency to eliminate a single point of failure between multiple networks.

· The Failover Clustering network driver detects networks on the system by their logical subnet. It is not recommended to assign more than one network adapter per subnet, including IPv6 link-local, as only one adapter would be used by the Cluster and the others ignored.

Network Binding Order:

The Adapters and Bindings tab lists the connections in the order in which they are accessed by network services. The order of these connections reflects the order in which generic TCP/IP calls/packets are sent on the wire.

How to change the binding order of network adapters

  1. Click Start, click Run, type ncpa.cpl, and then click OK. You can see the available connections in the LAN and High-Speed Internet section of the Network Connections window.
  2. On the Advanced menu, click Advanced Settings, and then click the Adapters and Bindings tab.
  3. In the Connections area, select the connection that you want to move higher in the list. Use the arrow buttons to move the connection. As a general rule, the card that talks to the network (domain connectivity, routing to other networks, etc.) should be the first bound card (at the top of the list).

Cluster nodes are multi-homed systems.  Network priority affects DNS Client for outbound network connectivity.  Network adapters used for client communication should be at the top in the binding order.  Non-routed networks can be placed at lower priority.  In Windows Server 2012/2012R2, the Cluster Network Driver (NETFT.SYS) adapter is automatically placed at the bottom in the binding order list.

Cluster Network Roles:

Cluster networks are automatically created for all logical subnets connected to all nodes in the Cluster.  Each network adapter card connected to a common subnet will be listed in Failover Cluster Manager.  Cluster networks can be configured for different uses.

Name                                          Value  Description
Disabled for Cluster Communication            0      No cluster communication of any kind is sent over this network
Enabled for Cluster Communication only        1      Internal cluster communication and CSV traffic can be sent over this network
Enabled for client and cluster communication  3      Cluster IP Address resources can be created on this network for clients to connect to; internal and CSV traffic can be sent over this network

Automatic configuration

Network roles are automatically configured during cluster creation. The table above describes the roles that cluster networks can be assigned.

Networks used for iSCSI communication with iSCSI software initiators are automatically disabled for cluster communication (Do not allow cluster network communication on this network).

Networks configured without a default gateway are automatically enabled for cluster communication only (Allow cluster network communication on this network).

Networks configured with a default gateway are automatically enabled for client and cluster communication (Allow cluster network communication on this network, Allow clients to connect through this network).

Manual configuration

Though the cluster networks are automatically configured while creating the cluster as described above, they can also be manually configured based on the requirements in the environment.

To modify the network settings for a Failover Cluster:

· Open Failover Cluster Manager

· Expand Networks.

· Right-click the network that you want to modify settings for, and then click Properties.

· If needed, change the name of the network.

· Select one of the following options:

o Allow cluster network communication on this network.  If you select this option and you want the network to be used by the nodes only (not clients), clear Allow clients to connect through this network. Otherwise, make sure it is selected.

o Do not allow cluster network communication on this network.  Select this option if you are using a network only for iSCSI (communication with storage) or only for backup. (These are among the most common reasons for selecting this option.)

Cluster network roles can also be changed using the Get-ClusterNetwork PowerShell cmdlet.

For example:

(Get-ClusterNetwork "Cluster Network 1").Role = 3

This configures “Cluster Network 1” to be enabled for client and cluster communication.
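To review the current role assignments across all cluster networks (a quick check using the FailoverClusters module):

PS C:\> Get-ClusterNetwork | Format-Table Name, Role, Address, AddressMask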

Configuring Quality of Service Policies in Windows 2012/2012R2:

To achieve Quality of Service, we can either use multiple network cards or create QoS policies with multiple VLANs.

Configuring QoS prioritization is recommended on all cluster deployments. Heartbeats and intra-cluster communication are sensitive to latency, and configuring a QoS Priority Flow Control policy helps reduce that latency.

An example of setting cluster heartbeating and intra-node communication to be the highest priority traffic would be:

New-NetQosPolicy "Cluster" -Cluster -Priority 6
New-NetQosPolicy "SMB" -SMB -Priority 5
New-NetQosPolicy "Live Migration" -LiveMigration -Priority 3

Note:

Available values are 0–6.

The policy must be enabled on all the nodes in the cluster and on the physical network switch.

Undefined traffic is priority 0.
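To verify what is in place, and to enable Priority Flow Control for a given priority on DCB-capable hardware (Enable-NetQosFlowControl is part of the Data Center Bridging feature; treat this as a sketch to adapt to your environment):

PS C:\> Get-NetQosPolicy
PS C:\> Enable-NetQosFlowControl -Priority 6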

Bandwidth Allocation:

It is recommended to configure a relative minimum bandwidth SMB policy on CSV deployments.

Example of setting a minimum bandwidth policy of 30% for cluster traffic, 20% for Live Migration, and 50% for SMB traffic:

New-NetQosPolicy "Cluster" -Cluster -MinBandwidthWeightAction 30
New-NetQosPolicy "Live Migration" -LiveMigration -MinBandwidthWeightAction 20
New-NetQosPolicy "SMB" -SMB -MinBandwidthWeightAction 50

Multi-Subnet Clusters:

Failover Clustering supports having nodes reside in different IP Subnets. Cluster Shared Volumes (CSV) in Windows Server 2012 as well as SQL Server 2012 support multi-subnet Clusters.

Typically, the general rule has been to have one network per role it provides. Cluster networks should be configured with the following in mind.

Client connectivity

Client connectivity is used by the applications running on the cluster nodes to communicate with client systems. This network can be configured with statically assigned IPv4 or IPv6 addresses, or with DHCP-assigned IP addresses. APIPA addresses should not be used, as such networks are ignored (the Cluster Virtual Network Adapter uses those address schemes). IPv6 stateless address autoconfiguration can be used, but keep in mind that DHCPv6 addresses are not supported for clustered IP address resources. These networks are also typically routable networks with a default gateway.

CSV Network for Storage I/O Redirection.

You would want this network if you are running a Hyper-V cluster with highly available virtual machines. This network is used for NTFS metadata updates to a Cluster Shared Volume (CSV) file system. These updates should be lightweight and infrequent unless there are communication-related events in the path to the storage.

In the case of CSV I/O redirection, latency on this network can slow down storage I/O performance, so Quality of Service is important for this network. If a storage path fails between any node and the storage, all I/O from that node is redirected over the network to a node that still has connectivity, so it can commit the data. All of that I/O is forwarded via SMB over the network, which is why network bandwidth is important.

Client for Microsoft Networks and File and Printer Sharing for Microsoft Networks need to be enabled to support Server Message Block (SMB), which is required for CSV. Configuring this network not to register with DNS is recommended, as it will not use any name resolution. The CSV network uses NTLM authentication for connectivity between the nodes.

CSV communication will take advantage of the SMB 3.0 features such as SMB multi-channel and SMB Direct to allow streaming of traffic across multiple networks to deliver improved I/O performance for its I/O redirection.
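To confirm that SMB Multichannel is actually in use between nodes, the SMB cmdlets introduced in Windows Server 2012 can help (a quick check, not a required configuration step):

PS C:\> Get-SmbMultichannelConnection
PS C:\> Get-SmbConnection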

By default, the cluster automatically chooses the NIC to be used for CSV. For manual configuration, refer to the following article.

Designating a Preferred Network for Cluster Shared Volumes Communication
http://technet.microsoft.com/en-us/library/ff182335(WS.10).aspx

This network should be configured for cluster communications.

Live Migration Network

As with the CSV network, you would want this network if you are running a Hyper-V cluster with highly available virtual machines. The Live Migration network is used for live-migrating virtual machines between cluster nodes. Configure this network as a cluster-communications-only network. By default, the Cluster automatically chooses the NIC for live migration.

Multiple networks can be selected for live migration depending on workload and performance. Live migration takes advantage of the SMB 3.0 feature SMB Direct to allow migrations of virtual machines to complete at a much quicker pace.

iSCSI Network:

If you are using iSCSI storage and a network to reach it, it is recommended that the iSCSI storage fabric have a dedicated and isolated network. This network should be disabled for cluster communications so that it is dedicated to storage traffic only.

This prevents intra-cluster communication as well as CSV traffic from flowing over the same network. During creation of the cluster, iSCSI traffic is detected and the network is disabled for cluster use. This network should be set lowest in the binding order.

As with all storage networks, you should configure multiple cards to allow redundancy with MPIO. Using the Microsoft-provided in-box teaming drivers, network card teaming is now supported with iSCSI in Windows Server 2012.

Heartbeat communication and Intra-Cluster communication

Heartbeat communication is used for health monitoring between the nodes to detect node failures. Heartbeat packets are lightweight (134 bytes) and sensitive to latency. If cluster heartbeats are delayed by a saturated NIC, blocked by firewalls, etc., a node could be removed from cluster membership.

Intra-cluster communication is used to update the cluster database across all the nodes whenever cluster state changes. Clustering is a distributed, synchronous system, so latency on this network can slow down cluster state changes.

IPv6 is the preferred protocol for this network, as it is more reliable and faster than IPv4. IPv6 link-local (fe80) addresses work for this network.

In Windows Clusters, heartbeat thresholds are increased by default for Hyper-V Clusters.

The default value changes when the first VM is clustered.

Cluster Property      Default  Hyper-V Default
SameSubnetThreshold   5        10
CrossSubnetThreshold  5        20

Generally, heartbeat thresholds are modified after cluster creation. If there is a requirement to increase the threshold values, this can be done during production hours and takes effect immediately.
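For example, to check the current values and raise the same-subnet threshold (property names as shown in the table above):

PS C:\> Get-Cluster | Format-List *SubnetThreshold
PS C:\> (Get-Cluster).SameSubnetThreshold = 10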

Configuring full mesh heartbeat

The Cluster Virtual Network Driver (NetFT.SYS) builds routes between the nodes based on the Cluster property PlumbAllCrossSubnetRoutes.

Value  Description
0      Do not attempt to find cross-subnet routes if local routes are found
1      Always attempt to find routes that cross subnets
2      Disable the cluster service from attempting to discover cross-subnet routes after a node successfully joins

To make a change to this property, you can use the command:

(Get-Cluster).PlumbAllCrossSubnetRoutes = 1

References for configuring Networks for Exchange 2013 and SQL 2012 on Failover Clusters.

Exchange server 2013 Configuring DAG Networks.
http://technet.microsoft.com/en-us/library/dd298065(v=exchg.150).aspx

Before Installing Failover Clustering for SQL Server 2012
http://msdn.microsoft.com/en-us/library/ms189910.aspx

At TechEd North America 2013, Elden Christensen (Failover Cluster Program Manager) delivered a session entitled Failover Cluster Networking Essentials that covers many of these configurations and best practices.

Failover Cluster Networking Essentials
http://channel9.msdn.com/Events/TechEd/NorthAmerica/2013/MDC-B337#fbid=ZpvM0cLRvyX

S. Jayaprakash
Senior Support Escalation Engineer
Microsoft India GTSC

Nodes being removed from Failover Cluster membership on VMWare ESX?


Welcome to the AskCore blog. Today, we are going to talk about nodes being removed from active Failover Cluster membership when the nodes are hosted on VMWare ESX. I have documented node membership problems in a previous blog:

Having a problem with nodes being removed from active Failover Cluster membership?
http://blogs.technet.com/b/askcore/archive/2012/02/08/having-a-problem-with-nodes-being-removed-from-active-failover-cluster-membership.aspx

This is a sample of the event you will see in the System Event Log in Event Viewer:

[Screenshot: System Event Log entry for a node removed from active Failover Cluster membership]

One specific problem that I have seen a few times lately is with the VMXNET3 adapters dropping inbound network packets because the inbound buffer is set too low to handle large amounts of traffic. We can easily find out if this is a problem by using Performance Monitor to look at the “Network Interface\Packets Received Discarded” counter.

[Screenshot: Performance Monitor with the Packets Received Discarded counter added]
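You can also sample the same counter from PowerShell (a spot check; the counter path is the one shown in Performance Monitor):

PS C:\> Get-Counter '\Network Interface(*)\Packets Received Discarded' -SampleInterval 5 -MaxSamples 12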

Once you have added this counter, look at the Average, Minimum, and Maximum values; if any of them is higher than zero, the receive buffer needs to be adjusted up for the adapter. This problem is documented in VMware's Knowledge Base:

Large packet loss at the guest OS level on the VMXNET3 vNIC in ESXi 5.x / 4.x
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2039495

I hope that this post helps you!

Thanks,

James Burrage
Senior Support Escalation Engineer
Windows High Availability Group

What’s New in Windows Servicing: Part 1

$
0
0

My name is Aditya and I am a Senior Support Escalation Engineer for Microsoft on the Windows Core Team. I am writing today to shed some light on a the new changes that have been made to the Windows Servicing Stack in Windows 8.1 and Windows Server 2012 R2. This is a 4 part series and this is the first one:

Windows 8.1 brings in a lot of new features to improve stability, reduce space usage and keep your machine up to date. This blog series will talk about each of these new features in detail and talk about some of the troubleshooting steps you will follow when you run into a servicing issue.

What is Servicing and The Servicing Stack: Windows Vista onwards use a mechanism called Servicing to manage operating system components, rather than the INF-based installation methods used by previous Windows versions. With Windows Vista and Windows Server 2008, component-based builds use images to deploy component stores to the target machine rather than individual files. This design allows installation of additional features and fixes without prompting for media, enables building different operating system versions quickly and easily, and streamlines all operating system servicing.

Within the servicing model, the update process for Vista+ Operating systems represents a significant advance over the update.exe model used in previous operating systems. Although update.exe had many positive features, it also had numerous issues, the foremost of which was the requirement to ship update.exe engine with each package.

Servicing is simplified by including the update engine, in the form of the servicing stack, as part of the operating system. The servicing stack files are located in the C:\Windows\WINSxs folder.

image

This folder can grow very large on Windows 2008 and Windows 2008 R2 system and more information can be found at why this happens at :

What is the WINSXS directory in Windows 2008 and Windows Vista and why is it so large?
http://blogs.technet.com/b/askcore/archive/2008/09/17/what-is-the-winsxs-directory-in-windows-2008-and-windows-vista-and-why-is-it-so-large.aspx

What’s new in Windows 8.1 and Server 2012 R2:

1. Component Store Analysis Tool:

A new feature has been added to the DISM command that will allow users to get detailed information on the contents of the Component Store (WinSxS folder).

There have been many users, mainly power users and IT Admins of Windows, who have raised concerns around the size of the WinSxS store and why it occupies so much space on the system. These users also have complaints about the size of WinSxS growing in size over time and are curious to know how its size can be reduced. A lot of users have questioned what happens if the WinSxS store is deleted completely. There have been multiple attempts in the past to explain what the WinSxS store contains and what the actual size of the WinSxS store is. For this OS release, a reporting tool has been created that a power user can run to find out the actual size of the WinSxS store as well as get more information about the contents of the Store. This is in addition to the article we will be publishing for users to understand how the WinSxS is structured, and what the actual size is as compared to the perceived size of this store.

The purpose of this feature is two-fold. First, is to educate power users and IT Admins of Windows about what WinSxS is, what it contains and its importance to the overall functioning of the OS. Second, this feature will deliver a tool via the DISM functionality to analyze and report a specific set of information about the WinSxS store for power users.

From various forums and blog posts, there seem to be two main questions that users have:

· Why is WinSxS so massive?

· Is it possible to delete WinSxS in part or completely?

In addition to this, OEMs do have questions about how they can clean up unwanted package store, servicing logs, etc. from the image.

Based on these questions, we felt that the most important metric for our tool would be the actual size of WinSxS. Secondly, it would be good to report packages that are reclaimable so that a user can startcomponentcleanup to scavenge them. Lastly, for devices like the Microsoft Surface, which remain on connected standby, it may be possible that the system never scavenged the image. In that case, considering that these tablets have small disk sizes, it becomes important to let users know when it was last scavenged and whether scavenging is recommended for their device.

We expect the amount of time for completion of the analysis to be somewhere between 40 and 90 seconds on a live system. In this scenario, there needs to be some indication of progress made visible to the user. We will use the existing progress UI of DISM to indicate the % of analysis completed to the user. The user will also get the option to cancel out of the operation through the progress UI.

The following steps describe the end to end flow of using the component store analysis tool:

· The user launches an elevated command prompt by typing Command Prompt on the Start screen.

· The user types in the DISM command:

Dism.exe /Online /Cleanup-image /AnalyzeComponentStore

At the end of the scan, the user gets a report of the results like this:

image

2. Component Store Cleanup:

The Component Store Cleanup functionality is one of several features aimed at reducing the overall footprint and footprint growth of the servicing stack. Reducing the footprint of Windows is important for many reasons, including providing end users more available disk capacity for their own files, and improving performance for deployment scenarios.

Component Store Cleanup in Windows 8 was integrated into the Disk Cleanup Wizard. It performs a number of tasks, including removing update packages that contain only superseded components, and compressing un-projected files (such as optional components, servicing stack components, etc.). For Windows 8.1, we will add the capability to perform deep clean operations without requiring a reboot.

Today, Component Store Cleanup must be triggered manually by an end-user, either by running DISM, or by using the Disk Cleanup Wizard. In order to make Component Store Cleanup more useful for the average end-user, it will be added into a maintenance task, automatically saving disk space for end-users. To enable this, a change will be made to allow uninstallation of superseded inbox drivers without requiring a reboot (today, all driver installs/uninstalls done by CBS require a reboot).

The superseded package removal feature of deep clean attempts to maintain foot print parity between a computer that has been serviced regularly over time vs. a computer that has been clean installed and updated.

2.1. How can Component Store Cleanup be initiated?

Component Store Cleanup will support being initiated in the below 3 ways:

1. Dism.exe /online /Cleanup-Image /StartComponentCleanup

clip_image002

2. Disk cleanup wizard :

a. To open Disk Cleanup from the desktop, swipe in from the right edge of the screen, tap Settings (or if you're using a mouse, point to the lower-right corner of the screen, move the mouse pointer up, and then click Settings), tap or click Control Panel, type Admin in the Search box, tap or click Administrative Tools, and then double-tap or double-click Disk Cleanup.

b. In the Drives list, choose the drive you want to clean, and then tap or click OK.

c. In the Disk Cleanup dialog, select the checkboxes for the file types that you want to delete, tap or click OK, and then tap or click Delete files.

d. To delete system files:

i. In the Drives list, tap or click the drive that you want to clean up, and then tap or click OK.

ii. In the Disk Cleanup dialog box, tap or click Clean up system files. clip_image003 You might be asked for an admin password or to confirm your choice.

clip_image005

c. In the Drives list, choose the drive you want to clean, and then tap or click OK.

d. In the Disk Cleanup dialog box, select the checkboxes for the file types you want to delete, tap or click OK, and then tap or click Delete files.

clip_image006

e. Automatic from a scheduled task:

i. If Task Scheduler is not open, start the Task Scheduler. For more information, see Start Task Scheduler.

ii. Expand the console tree and navigate to Task Scheduler Library\Microsoft\Windows\Servicing\StartComponentCleanup.

iii. Under Selected Item, click Run

clip_image008

The StartComponentCleanup task can also be started from the command line:

schtasks.exe /Run /TN "\Microsoft\Windows\Servicing\StartComponentCleanup"

For all three methods, an automatic scavenge will be performed after the disk cleanup in order to immediately reduce the disk footprint. When scavenge is performed for option 1, NTFS compression will not be used since it has a negative impact on capture and apply times, but Delta Compression will be used since it will help with both capture and apply. When run automatically for option 3, deep clean and the scavenge operation will be interruptible in order to maintain system responsiveness.

2.2. What does Component Store Cleanup do?

During automatic Component Store Cleanup, packages will be removed if the following criteria apply:

  • All components in the package are in a superseded state

  • The package is not of an excluded class (permanent, LP, SP, foundation)

  • The package is older than the defined age threshold

Only packages that have been superseded for a specified number of days (the default is 30 days) will be removed by the automated deep clean task. To maintain user responsiveness, automatic Component Store Cleanup performs package uninstall operations one at a time, checking whether a stop has been requested between each package.

The Component Store Cleanup maintenance task will be incorporated into the component platform scavenging maintenance task. This task runs once a week, with a deadline of two weeks. This ensures that scavenging and deep clean processing happen relatively quickly after patches are released on Patch Tuesday.

Manual Component Store Cleanup

During manual Component Store Cleanup, packages will be removed if the following criteria apply:

· All components in package are in superseded state

· Packages are not of an excluded class (permanent, LP, SP, foundation)

The functionality for manual Component Store Cleanup largely already exists in Windows 8. To improve performance, manual deep clean performs all package uninstall operations in a single KTM transaction and is not interruptible. Superseded packages are not subject to an age limit; instead, they are removed immediately.
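As a quick sketch of a manual cleanup session (both options below are available starting with Windows 8.1 and Windows Server 2012 R2; treat /ResetBase with care, since it makes already-installed updates permanent):

REM Report the size of the component store and whether cleanup is recommended
Dism.exe /Online /Cleanup-Image /AnalyzeComponentStore

REM Remove superseded packages immediately; /ResetBase also removes the ability to uninstall existing updates
Dism.exe /Online /Cleanup-Image /StartComponentCleanup /ResetBase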

In the next blog in the series, we will discuss Delta Compression and Single Instancing…

Aditya
Senior Support Escalation Engineer
Microsoft Platforms Support


Windows XP support ending April 8, 2014


I’m sure you already know this, but if you don’t: Windows XP support ends on April 8th, 2014.  That is 15 days from this post.  Below are a couple of links that will give you more information on moving forward:

image

<SNIP from the second link above>

As a result, after April 8, 2014, technical assistance for Windows XP will no longer be available, including automatic updates that help protect your PC. Microsoft will also stop providing Microsoft Security Essentials for download on Windows XP on this date. (If you already have Microsoft Security Essentials installed, you will continue to receive antimalware signature updates for a limited time, but this does not mean that your PC will be secure because Microsoft will no longer be providing security updates to help protect your PC.)

If you continue to use Windows XP after support ends, your computer will still work but it might become more vulnerable to security risks and viruses. Also, as more software and hardware manufacturers continue to optimize for more recent versions of Windows, you can expect to encounter greater numbers of apps and devices that do not work with Windows XP.

</END SNIP>

Windows main page

John Marlin
Senior Support Escalation Engineer
Microsoft Global Business Support

Failover Clustering and Active Directory Integration


My name is Ram Malkani and I am a Support Escalation Engineer on Microsoft’s Windows Core team. I am writing to discuss how Failover Clustering is integrated with Active Directory on Windows Servers.

Windows Server Failover Clustering has always been tightly integrated with Active Directory. We made considerable changes to how Failover Clustering integrates with AD DS as we progressed through new versions of clustering on Windows Server. Let us see the story so far:

Windows Server 2003 and previous versions

We needed a Cluster Service Account (CSA): a domain user whose credentials were used for the Cluster service and the clustered resources. This had its problems: changing the password for the account, rotating passwords, and so on. Later, we added support for Windows Server 2003 clusters to use Kerberos authentication, which created objects in Active Directory.

Windows Server 2008, 2008 R2

We moved away from the CSA; instead, the cluster started to use Active Directory computer objects associated with the Cluster Name resource (the CNO) and Virtual Computer Objects (VCOs) for other network names in the cluster. When a cluster is created, the logged-on user needs permissions to create the computer objects in AD DS, or you can ask the Active Directory administrator to pre-stage the computer object(s) in AD DS. Cluster communication between nodes also uses AD authentication.

Windows Server 2012

The same information provided for Windows Server 2008 and 2008 R2 applies; however, we included a feature improvement to allow cluster nodes to come up when AD is unavailable for authentication, allowing Cluster Shared Volumes (CSVs) to become available and the VMs on them (potentially Domain Controllers) to start. This was a major issue, as otherwise we had to have at least one available Domain Controller outside the cluster before the Cluster Service could start.

 

What’s new with Clustering in Windows Server 2012 R2

We have introduced a new mode for creating a Failover Cluster on Windows Server 2012 R2, known as an Active Directory-detached cluster. Using this mode, you no longer need to pre-stage the computer objects, and you no longer have to worry about their management and maintenance. Cluster administrators no longer need to be wary of accidental deletions of the CNO or the Virtual Computer Objects (VCOs). The CNO and VCOs are instead created in the Domain Name System (DNS).

This feature provides greater flexibility when creating a Failover Cluster and enables you to choose to install Clusters with or without AD integration. It also improves the overall resiliency of cluster by reducing the dependencies on CNO and VCOs, thereby reducing the points of failure on the cluster.

Intra-cluster communication continues to use Kerberos for authentication; however, authentication of the CNO is done using NTLM. Thus, keep in mind that an AD-detached cluster is not recommended for any cluster role that requires Kerberos authentication.

Before installing an Active Directory-detached Cluster, considerations must be made on what role will be on the Cluster.  Not all roles will work on a Cluster in this configuration. For details on cluster roles that are not recommended or unsupported for AD detached Clusters, please read:

Deploy an Active Directory-Detached Cluster
http://technet.microsoft.com/en-us/library/dn265970.aspx

Installing Active Directory detached Cluster

First, make sure that the Windows Server 2012 R2 nodes you intend to add to the cluster are part of the same domain, and then install the Failover Clustering feature on them. This is very similar to a conventional cluster installation on Windows Server. To install the feature, you can use Server Manager.

Server Manager can be used to install the Failover Clustering feature:

Introducing Server Manager in Windows Server 2012
http://blogs.technet.com/b/askcore/archive/2012/11/04/introducing-server-manager-in-windows-server-2012.aspx

We can alternatively use PowerShell (Admin) to install the Failover Clustering feature on the nodes.

Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools

An important point to note is that the PowerShell cmdlet ‘Add-WindowsFeature’ is replaced by ‘Install-WindowsFeature’ in Windows Server 2012 R2. PowerShell does not install the management tools for the requested feature unless you specify ‘-IncludeManagementTools’ as part of your command.

image

 

BONUS READ:
The Cluster Command line tool (CLUSTER.EXE) has been deprecated; but, if you still want to install it, it is available under:
Remote Server Administration Tools --> Feature Administration Tools --> Failover Clustering Tools --> Failover Cluster Command Interface in the Server Manager

image

The PowerShell (Admin) equivalent to install it:

Install-WindowsFeature -Name RSAT-Clustering-CmdInterface

Now that we have the Failover Clustering feature installed on our nodes, ensure that all hardware connected to the nodes passes the Cluster Validation tests.
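As a minimal sketch (reusing the node names from the example below), validation can be run from PowerShell before the cluster is created:

Test-Cluster -Node My2012R2-N1,My2012R2-N2

Let us now go on to create our cluster. You cannot create an AD-detached cluster from the Failover Cluster Manager GUI; the only way to create one is by using PowerShell.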

New-Cluster MyCluster -Node My2012R2-N1,My2012R2-N2 -StaticAddress 192.168.1.15 -NoStorage -AdministrativeAccessPoint DNS

image

NOTE:
In my example above, I am using static IP Addresses, so one would need to be specified.  If you are using DHCP for addresses, the switch “-StaticAddress 192.168.1.15” would be excluded from the command.


Once we have executed the command, we have a new cluster named “MyCluster” with two nodes, “My2012R2-N1” and “My2012R2-N2”. When you look in Active Directory, there will be no computer object created for the cluster “MyCluster”; however, you will see the record as the access point in DNS.

image

 

That’s it! Thank you for your time.

Ram Malkani
Support Escalation Engineer
Windows Core Team

What’s New in Windows Servicing: Reduction of Windows Footprint : Part 2


My name is Aditya and I am a Sr. Support Escalation Engineer for Microsoft on the Windows Core Team. This blog is a continuation of the previous post, Servicing Part 1, so to understand this blog better, it is recommended that you read that post first. As mentioned previously, this is a four-part blog series on Windows Servicing.

What’s New in Windows Servicing: Part 1
What’s New in Windows Servicing: Reduction of Windows Footprint : Part 2

Before we dive into Single Instancing and Delta Compression, I thought it would be a good idea to talk about why these were introduced and how things worked in previous operating systems. The reason for both Single Instancing and Delta Compression was to reduce the footprint of Windows (Windows 8.1 and Windows Server 2012 R2). Here is how and why:

Windows Footprint Reduction Features: The disk footprint of Windows directly affects end-users, as it reduces the amount of space available for music, videos, pictures, and all other content. Even as we shift more user content to the cloud, factors such as high-resolution photos and videos, limited and costly bandwidth, and safety/security concerns over cloud storage mean that local storage requirements will persist for the next few years.

The disk footprint of Windows also directly affects our OEM partners. Available storage is one of the most important metrics that an end-user looks at when purchasing a system, and OEMs are pushed to provide higher storage capacity. The current trend is that many OEMs are shifting to SSD storage due to its small footprint (enabling smaller, sleeker devices), low power consumption, low noise, and improved performance. Unfortunately, SSD storage can cost as much as 10x the price of conventional spindle-based storage, which means that OEMs can only add limited storage to their systems before the cost becomes too great.

If Windows consumes less of the available disk footprint, while still providing a great end-user experience, this provides end-users with more disk space for their content, without requiring the OEM to spend more on storage, thus reducing the price of PCs.

For rollback purposes, previous versions of Windows components are sometimes kept in the WinSxS store after new updates are installed through Windows Update. The MUM servicing feature, introduced in Windows 7 and Windows Server 2008 R2, ensures that the disk space growth due to GDR installations can be reclaimed after a Service Pack (SP) installation by running the Disk Cleanup utility manually.

Windows strives to constrain servicing footprint growth due to GDR installations either before or after an SP installation. The feature also focuses on enabling servicing footprint reduction support at the Component Based Servicing technology level, targeting the following scenarios:

1. Consumers opt in for automatic updates on their Windows 8 devices, and notice that the WinSxS store footprint no longer grows significantly over time.

2. Consumers notice that the WinSxS store footprint has grown due to update installations over time, and then run Disk Cleanup Utility to reduce the WinSxS store footprint and reclaim disk space on their devices.

3. OEMs service their golden images in technician labs over time to keep them up-to-date and secured. Before the image is delivered to ODM for deployment at factory floor, they clean up the image by running DISM to scavenge away all the superseded components and recapture the smaller sized image.

4. Similarly, IT Admins service master images in their image libraries to keep them up-to-date and secured. Before the image is ready for deployment to Client machines, they clean up the image by running DISM to scavenge away all the superseded components and recapture the smaller sized image.

This feature reduces the disk space used by Windows, with a focus on Windows components. Windows Update routinely installs patches on released Windows machines but does not always remove the previous content that is replaced by the patches and is no longer in use. The purpose of this feature is to reduce disk footprint growth over time and also to provide a means by which power users can reduce the original disk footprint of Windows.

This feature reduces disk footprint growth over time by uninstalling and deleting content that can be removed from the system, and by compressing unused content that cannot be removed from the system.

Reducing the footprint of Windows also improves deployment performance, which benefits consumers, Enterprise, and OEMs.

1. Single Instancing Catalogs: This feature contributes to component store footprint reduction by single-instancing catalogs across the CATROOT and Windows Servicing Stack stores.

  • Catroot: %windir%\system32\catroot

  • Servicing Stack Packages: %windir%\servicing\packages

  • Servicing Stack Catalogs: %windir%\winsxs\catalogs

 The redundant catalogs are single-instanced by hard-linking them across the three stores, nullifying the Windows Servicing Stack footprint overhead. To minimize impact to other catalog clients, changes were scoped to just those catalogs installed by the Servicing Stack.

For more information on how hard-linking works in the Windows Servicing Stack, refer to this TechNet article:

Manage the Component Store
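To see single-instancing in action, you can list the hard links for one of the servicing catalogs (a sketch; the file name below is a placeholder, so substitute a catalog that actually exists on your system):

REM List every path that shares the same file data as this catalog
fsutil hardlink list %windir%\winsxs\catalogs\example.cat

If the catalog is single-instanced, the output also lists its sibling paths in the other stores.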

2. Delta Compression of Superseded Components: This feature contributes to component store footprint reduction by significantly reducing the size of files that have been superseded by later updates yet remain on the computer in case the user needs to uninstall a recent update.

  • Component: The smallest serviceable unit, consisting of files, registry data, and the metadata that describes how to service that set of files.

  • Installed component (winner): The ‘winning’ version of a component in a component family. This is the payload that is projected into System32 (or whichever location is specified in the component manifest).

  • Installed component (superseded): These components are installed but are older versions than the winning component. The payload exists in the component store but does not get projected to System32. If the winning component is uninstalled, the highest-versioned remaining component becomes the new winner.

  • Latent component: These components are available for installation under the proper circumstances but are not currently installed. The most common form of a latent component is one that belongs to an optional feature that is currently disabled.

Superseded components are kept in the component store in case a user uninstalls the winning component (by uninstalling an update, for example). End-users infrequently uninstall updates, making those updates a prime target for reclaiming space. This feature uses a type of compression known as delta compression to dramatically reduce the size of superseded and latent components.

Delta compression is a technology based on differencing two similar files. One version is used as a baseline, and another version is expressed as the baseline plus deltas.

The delta compression is performed against the winner component at the time of compression. This means the deltas for a specific component can differ from machine to machine, depending on which winner was available at the time of compression.

Let me explain this with the following diagram (Figure 1), in which V1, V2, and V3 are all installed components prior to compression. During compression, V1 and V2 are compared against V3, the current winner, to create the necessary deltas.

clip_image002

Figure 1

In the next example (refer to Figure 2 below), V1 and V2 are installed, with V2 being the winner. After compression, the V1 delta is created using V2 as the basis. Subsequently, V3 is installed. After the next compression, the V2 delta is created using V3 as the basis.

Figure 2

Decompression or Rehydration: If the winning component is uninstalled, the Windows Servicing Stack decompresses any components that use the uninstalled version as their baseline, and makes the next highest-versioned component the new winner. The uninstalled version is marked for deletion, and later, when the Servicing Stack's maintenance task runs, the uninstalled version is deleted and any remaining superseded files are compressed against the new winner. For an example, refer to Figure 3 below.

Figure 3

There may be cases where a file needs to be decompressed but its basis file is also compressed. In these cases, the Windows Servicing Stack decompresses the full chain of files necessary to rehydrate the final winning file.

Figure 4

At this point, the big question that comes to mind is: when do we delta compress components? The answer is pretty simple: delta compression of superseded and latent content in the component store happens as part of the Servicing Stack's maintenance task. This process can be triggered either manually or automatically.

Manual maintenance: Triggered manually by dism.exe.

Dism /online /cleanup-image /startcomponentcleanup

Automatic maintenance: Triggered by a scheduled maintenance task when the system is idle.

Task Scheduler Library  -->  Microsoft  -->  Windows  -->  Servicing

The automatic case is interruptible and resume-able. It automatically stops when the computer is no longer idle, and resumes when it becomes idle again.

For more detailed information, please refer to What’s New in Windows Servicing: Part 1.

Definitions:

  • Delta compression: Compressing a file by capturing a diff of the file against a basis file. The basis file is required for decompression.

  • Backup directory: A directory containing copies of boot-critical files that are used to repair corruption.

  • Manifest: A file describing the contents of a component. Windows is essentially defined by component manifests, approximately 15,000 of them (on amd64).

I hope this blog has helped you understand the effort put in behind the scenes by the Windows team to considerably reduce the size of WinSxS in Windows 8.1 and Windows Server 2012 R2.

In the next blog in the series, we will discuss the Servicing Stack improvements in KB2821895 for Windows 8 and how they assist your upgrade to Windows 8.1. Till then, happy reading…

Aditya
Senior Support Escalation Engineer
Microsoft Platforms Support

What’s New in Windows Servicing: Service Stack Improvements: Part 3


Servicing Stack improvements in KB2821895 for Windows 8, and how they assist the upgrade to 8.1

My name is Aditya and I am a Sr. Support Escalation Engineer for Microsoft on the Windows Core Team. This blog is a continuation of the previous posts in the series, so to understand this blog better, it is recommended that you read them first. As mentioned previously, this is a four-part blog series on Windows Servicing.

What’s New in Windows Servicing: Part 1
What’s New in Windows Servicing: Reduction of Windows Footprint : Part 2
What’s New in Windows Servicing: Service Stack Improvements: Part 3

This feature back-ports Windows 8.1 features that reduce the disk footprint of the component store. Any freed space is reserved for system use when upgrading to Windows 8.1.

In the last blog, we discussed the hard work put in by our Core Deployment Platform (CDP) team to reduce the amount of free disk space required on small-footprint devices. Even with these reductions, an upgrade requires at least 5 GB of free space.

To further reduce the perceived amount of space required, a Servicing Stack Update (SSU) for Windows 8 has been created that back ports Windows 8.1 Component Store Footprint Reduction features. It also introduces the maintenance task for controlling the footprint reductions and a set of Deep Clean operations. Any space freed by the maintenance task will be reserved for use by the Windows 8.1 upgrade process.

The below features were targeted for the down-level port:

1. Delta compression of the Component Store

2. Deep Clean, uninstall of superseded GDR packages

The features are used by the maintenance task to scavenge disk space. In addition to back-porting these features, the servicing stack update must reserve free space for the upgrade to Windows 8.1 Client. As we do not encourage upgrades of Windows Server 2012, this feature does not reserve space on server SKUs; it is only for client SKUs.

When we install Windows 8 (32-bit) on a machine and check the size of the WinSxS folder, we should see something like Figure 1:

image

When we run Windows Update for the first time on the machine via the Control Panel applet, we should have about 84 updates, which come to about 515 MB, as shown in Figure 2:

image

After the machine reboots, the WinSxS folder grows by about 2 GB, as shown in Figure 3:

image

Given the amount of space taken up after applying Windows Update, we should download and apply update KB2821895.

image
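As a sketch, the update can be installed silently from an elevated command prompt (the .msu file name below is illustrative; use the package you downloaded for your architecture):

REM Install the servicing stack update without prompts; reboot later if required
wusa.exe Windows8-RT-KB2821895-x64.msu /quiet /norestart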

After the update is installed, a maintenance task runs weekly and continues to reclaim disk space up to the time the machine is upgraded to Windows 8.1. It creates a temporary file equal in size to the space saved by delta compression during the footprint reduction. This file is hidden and marked as an OS file so that it is not easily visible.

Location of reserve file is:
%windir%\winsxs\reserve.tmp
clip_image006

The size of this file is saved to the registry under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\SideBySide\Configuration\[reserve]. This value is used to determine whether the reserve file was created on the machine and then deleted.

clip_image008
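To check for the reserve file and the registry value on a machine, here is a quick sketch (the value name is assumed to be 'reserve', as the bracketed path above suggests):

REM The reserve file carries the hidden and system attributes, so list it explicitly
dir /a:sh %windir%\winsxs\reserve.tmp

REM Query the size value recorded by the maintenance task (value name assumed)
reg query "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\SideBySide\Configuration" /v reserve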

Note: Only Windows 8 SKUs that are capable of being upgraded through the Microsoft Store will have space reserved in the temp file.

During the Windows 8.1 store upgrade process, this file is deleted and the reclaimed disk space becomes free space which should ensure a successful upgrade to Windows 8.1.

New to WIN8.1 and Windows Server 2012 R2

Smart Pending Logic

This feature allows updates that do not require a reboot to install immediately and does not require them to merge with updates being installed that require a reboot. It also decreases the time it takes to install updates during reboot, since only the updates that require a reboot are installed at that time.

Currently, when multiple updates are being applied to a system and one or more of the updates requires a reboot, all updates after the “first update that requires reboot” are installed during the reboot process.

In the current servicing stack design, Windows Servicing Stack passes a flag to the Servicing Infrastructure to pend the installation of a package if:

  • Any package is already pended
  • Pending.xml exists
  • PendingRequired flag is set in the Servicing Infrastructure store

The limitations with this design are:

  • After the packages are merged together and installation is attempted during reboot, a failure caused by any one of those updates causes a failure for all the other updates.
  • Teams that design their components for reboot-less updating cannot gain any benefit from that design because of limitations in the stack itself.
  • Because all pended updates are installed during machine reboot, the number of pended updates determines the length of the non-interactive user time while installing the updates.

The current design that we have in Windows 8 and Server 2012 looks something like this:

image

With this new feature, the Windows Servicing Stack does not check whether a reboot is pending and always tries to install the update completely. The operational flow of the new design looks like this:

image

 

In Windows 8.1 and Server 2012 R2, updates that don’t require a reboot are completely installed immediately, and only those that require a reboot are pended for installation during reboot. Smart pending logic applies to online servicing operations only.

Smart Pending exceptions:

The following types of packages are not going to be smart pended due to performance and reliability reasons:

  • Large packages, such as a service pack or language pack.
  • Special packages that cannot be merged with other packages.
  • Servicing stack updates.

The below diagram describes the logic:

image

I hope this blog has helped you understand the changes made to Windows 8 and the new features added to Windows 8.1, especially the smart pending logic put in to make sure that we save more drive space.

In the next blog in the series, we will discuss the automated maintenance tasks that check for system file corruption and file system health, clean up unused drivers, and more. Till then, happy reading…

Aditya
Senior Support Escalation Engineer
Microsoft Platforms Support

Removing .NET Framework 4.5/4.5.1 removes Windows 2012/2012R2 UI and other features


This is Vimal Shekar and Krishnan Ayyer from the Windows Support team. Today in this blog, we will discuss an issue that we are seeing reported increasingly often in support: the effects of removing the .NET Framework from a Windows Server 2012/2012 R2 installation.

Windows Server 2012 includes .NET Framework 4.5 and Windows Server 2012 R2 includes .NET Framework 4.5.1. The .NET Framework provides a comprehensive and consistent programming model to build and run applications (including roles and features) that are built for various platforms. Windows Explorer (the graphical shell), Server Manager, Windows PowerShell, IIS, ASP.NET, Hyper-V, etc., are all dependent on the .NET Framework. Since multiple OS components depend on the .NET Framework, the feature is installed by default, so you do not have to install it separately.

Uninstalling the .NET Framework is not recommended; however, in some circumstances there may be a requirement to remove and re-install the .NET Framework on Windows Server 2012/2012 R2.

When you uncheck the .NET Framework 4.5 checkbox in the Remove Roles and Features Wizard of Server Manager, Windows checks for any other installed roles and features that would need to be removed as well. If there are other roles or features dependent on the .NET Framework, they are listed in an additional window.

For Example:

image

 

If you read through the list, the components that are affected by this removal are listed as follows:

  1. .NET Framework 4.5 Features
  2. RSAT (Remote Server Administration Tools), which includes the Hyper-V Management tools and the Hyper-V GUI
  3. User Interfaces and Infrastructure, which includes Graphical Management Tools and Infrastructure and the Server Graphical Shell (full shell and MinShell)
  4. PowerShell, which removes PowerShell 4.0 and the ISE entirely

The list of components may differ depending on the roles and features installed on the server. If you were to use DISM.exe commands to remove the .NET feature, you may not see such a list at all. If you were to use the following PowerShell command to remove the .NET feature, you will not get the list either.

Uninstall-WindowsFeature Net-Framework-45-Features

If you were to use the Remove-WindowsFeature PowerShell cmdlet, you can add the -WhatIf switch to see the list of features that would also be impacted.

Remove-WindowsFeature Net-Framework-45-Features –WhatIf

Unfortunately, we all get in a hurry sometimes, do not read through the list, and click “Remove Features”. Notice that “Server Graphical Shell” and “Graphical Management Tools and Infrastructure” are part of the features being removed.

Here is a sample output from running Remove-WindowsFeature Net-Framework-45-Features -WhatIf. Again, you will see that removing the .NET Framework effectively also removes the following:

clip_image005

The two key features that I wanted to point out are:

[User Interfaces and Infrastructure] Server Graphical Shell

[User Interfaces and Infrastructure] User Interfaces and Infrastructure

As stated earlier, this will leave the server without a graphical shell for user interaction. Only the command prompt will be available post reboot.

If you get into this situation, run the commands below in the Server Core command prompt window to recover:

DISM.exe /online /enable-feature /all /featurename:NetFx4
DISM.exe /online /enable-feature /all /featurename:MicrosoftWindowsPowerShell

The above commands will re-install .Net 4.0 and PowerShell on the server. Once PowerShell is installed, you can add the Graphical Shell (Windows Explorer) using the following command:

Install-WindowsFeature Server-Gui-Shell, Server-Gui-Mgmt-Infra

Once the GUI Shell is installed, you will need to restart the server with the following command:

Restart-Computer
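After the restart, you can confirm that the GUI features came back (a quick check; Install State should read Installed):

Get-WindowsFeature -Name Server-Gui-Shell, Server-Gui-Mgmt-Infra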

NOTE:

Remove-WindowsFeature is an alias of Uninstall-WindowsFeature.  The -WhatIf parameter shows what would occur if the command were run, but does not execute the command.

We hope this information was helpful.

Vimal Shekar
Escalation Engineer
Microsoft Support

Krishnan S Ayyer
Technical Advisor
Microsoft Support

How to Configure MSDTC to Use a Specific Port in Windows Server 2012/2012R2


My name is Steven Graves and I am a Senior Support Escalation Engineer on the Windows Core Team.  In this blog, I will discuss how to configure MSDTC to use a specific port on Windows Server 2012/2012R2 as this has slightly changed from the way it is configured in Windows Server 2008 R2 in order to prevent overlapping ports.  As a reference, here is the blog for Windows 2008 R2.

How to configure the MSDTC service to listen on a specific RPC server port
http://blogs.msdn.com/b/distributedservices/archive/2012/01/16/how-to-configure-the-msdtc-service-to-listen-on-a-specific-rpc-server-port.aspx

Scenario

There is a web server in a perimeter network and a standalone SQL Server (or Clustered SQL Server instance) on a backend production network and a firewall that separates the networks. MSDTC needs to be configured between the web server and backend SQL Server using a specific port in order to limit the ports opened on the firewall between the networks.

So as an example, we will configure MSDTC to use port 5000.

There are two things that need to be configured on the frontend web server to restrict the ports that MSDTC will use.

  • Configure the ports DCOM can use
  • Configure the specific port or ports for MSDTC to use

Steps

1. On the web server, launch Dcomcnfg.exe from the Run menu.

2. Expand Component Services, right click My Computer and select Properties

clip_image002

3. Select the Default Protocols tab

clip_image004

4. Click Properties button

clip_image006

5. Click Add

6. Type in the port range that is above the port MSDTC will use. In this case, I will use ports 5001-6000.

7. Click OK to return to the My Computer properties window, and click OK again.  Here is the registry key that is modified for the ephemeral ports.

clip_image008

8. Start Regedt32.exe

9. Locate HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSDTC

10. Right click the MSDTC key, select New and DWord (32-bit) Value

11. Type ServerTcpPort for the value name

12. Right click the ServerTcpPort value and select Modify

13. Change the radio button to Decimal, type 5000 in the value data, and click OK.  This is how the registry value should look:

clip_image010

14. Restart the MSDTC Service (if stand-alone) or take the MSDTC Resource offline/online in Failover Cluster Manager if clustered.
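For reference, steps 8 through 13 can also be scripted (a sketch using PowerShell from an elevated prompt):

# Create the ServerTcpPort DWORD value under the MSDTC key and set it to 5000
New-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\MSDTC" -Name ServerTcpPort -PropertyType DWord -Value 5000 -Force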

To confirm MSDTC is using the correct port:

  1. Open an Administrative command prompt and run Netstat –ano to get the port and the Process Identifier (PID)
  2. Start Task Manager and select Details tab
  3. Find MSDTC.exe and get the PID
  4. Review the output for the PID to show it is MSDTC

clip_image012
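The same check can be condensed at an administrative command prompt (1234 below is a placeholder; replace it with the PID that the netstat output reports):

REM Show listeners on port 5000 along with the owning process ID
netstat -ano | findstr :5000

REM Confirm that the process ID belongs to MSDTC
tasklist /fi "PID eq 1234"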

Now DTC will use the port specified in the registry, and no other process will try to use the same port, thus preventing overlapping ports.

Steven Graves
Senior Support Escalation Engineer
Microsoft Core Support

Introducing Script Browser - A world of scripts at your fingertips


To reuse script samples from the Internet, the following steps seem quite familiar to IT Pros: wandering through different script galleries, forums, and blogs; switching back and forth between webpages and the scripting environment; and countless download, copy, and paste operations. But all of these will drive one as dizzy as a goose. Need a simpler way of searching and reusing scripts? Try out the new Script Browser add-in for PowerShell ISE!

Download Here

Script Browser for Windows PowerShell ISE is an app developed by Microsoft Customer Services & Support (CSS), with assistance from the PowerShell team and the Garage, to save IT Pros from the painful process of searching and reusing scripts. We start from the 9,000+ script samples on TechNet Script Center. Script Browser allows users to directly search, learn, and download TechNet scripts from within PowerShell ISE, your scripting environment. Starting from this month, Script Browser for PowerShell ISE will be available for download. If you are a PowerShell scripter or are about to be one, Script Browser is a highly recommended add-in for you.

Nearly 10,000 scripts on TechNet are available at your fingertips. You can search, download, and learn scripts from this ever-growing sample repository.

We enabled offline search for downloaded script samples so that you can search and view script samples even when you have no Internet access.

You will also get the chance to try out another new function bundled with Script Browser: ‘Script Analyzer’. A Microsoft CSS engineer used the PowerShell Abstract Syntax Tree (AST) to check your current script against some pre-defined rules. In this first version, he built 7 pilot PowerShell best-practice checking rules. When you double-click a result, the script code that does not comply with the best-practice rule is highlighted. We hope to get your feedback on this experimental feature.

It is very essential that an app satisfies users’ requirements. Therefore, feedback is of prime importance. For Script Browser, Microsoft MVPs are one of the key sources where we get constructive feedback. When the Script Browser was demoed at the 2013 MVP Global Summit in November and 2014 Japan MVP Open Day, the MVP community proposed insightful improvements. For instance, MVPs suggested showing a script preview before users can decide to download the complete script package. MVPs also wanted to be able to search for script samples offline. These were great suggestions, and the team immediately added the features to the release. We have collected a pool of great ideas (e.g. MVPs also suggested that the Best Practice rules checking feature in Script Analyzer should be extensible). We are committed to continuously improving the app based on your feedback.

We have an ambitious roadmap for Script Browser. For example, we plan to add more script repositories to the search scope. We are investigating integration with Bing Code Search. We are also trying to improve the extensibility of Script Analyzer rules. Some features, like script sample sharing and searching within an enterprise, are still in their infancy.

The Script Browser was released in mid-April and has received thousands of downloads since its release. Based on your feedback, today we are releasing the 1.1 update to deliver the most highly requested features. The team is committed to making the Script Browser and Script Analyzer useful. Your feedback is very important to us.

Download Script Browser & Script Analyzer 1.1
(If you have already installed the 1.0 version, you will get an update notification when you launch Windows PowerShell ISE.)

1. Options to Turn on / Turn off Script Analyzer Rules

You can either select to turn on or turn off the rules in the Settings window of Script Analyzer.

image

You can also suggest a new Script Analyzer rule or vote for others’ suggestions. Our team monitors the forum closely. Based on your suggestions and votes, we will provide the corresponding Script Analyzer rules in future updates. We are also looking into the capability for you to write your own Script Analyzer rules and plug into the Script Analyzer.

2. Refined Script Analyzer Rules with Detailed Description

Thanks to your feedback, we refined the Script Analyzer rules that were released in the version 1.0. We also fixed all rule issues that you reported. Each rule comes with a detailed description, good/bad examples, and supporting documents. Here are the 5 refined rules released in this update. We look forward to learning your feedback.

Invoke-Expression use should be carefully considered

Invoke-Expression is a powerful command; it’s useful under specific circumstances but can open the door for malicious code being injected. This command should be used judiciously.

http://blogs.msdn.com/b/powershell/archive/2006/11/23/protecting-against-malicious-code-injection.aspx
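For illustration (this example is ours, not from the rule's documentation), prefer calling cmdlets directly over building command strings; if $name came from user input, the first form would execute anything injected into it:

# Risky: whatever ends up in $name is evaluated as PowerShell code
Invoke-Expression "Get-Process -Name $name"

# Safer: pass the value directly as a parameter
Get-Process -Name $name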

Cmdlet alias use should be avoided

PowerShell is a wonderfully efficient scripting language, allowing an administrator to accomplish a lot of work with little input or effort. However, we recommend you use full cmdlet names instead of aliases when writing scripts that will potentially need to be maintained over time, either by the original author or another PowerShell scripter. Using aliases may cause problems related to readability, understandability, and availability.

Take the following PowerShell command for an example:

Ls | ? {$_.psiscontainer} | % {"{0}`t{1}" -f $_.name, $_.lastaccesstime}

The above syntax is not very clear to the novice Powershell scripter, making it hard to read and understand.

The same command with the full Cmdlet names is easier to read and understand.

Get-ChildItem | Where-Object {$_.psiscontainer} | ForEach-Object {"{0}`t{1}" -f $_.name, $_.lastaccesstime}

Lastly, we cannot guarantee that an alias will exist in all environments.

For more information, please see the linked Scripting Guy blog on this topic.

http://blogs.technet.com/b/heyscriptingguy/archive/2012/04/21/when-you-should-use-powershell-aliases.aspx

Empty catch blocks should be avoided

Empty catch blocks are considered poor design because if an error occurs in the try block, the error is simply swallowed and not acted upon. Although this does not always lead to undesirable results, the chance is still there. Therefore, empty catch blocks should be avoided if possible.

Take the following code for an example:

try
{
        $SomeStuff = Get-SomeNonExistentStuff
}
catch
{
}

If we execute this code in Powershell, no visible error messages will be presented alerting us to the fact that the call to Get-SomeNonExistentStuff fails.

A possible solution:

try
{
         $SomeStuff = Get-SomeNonExistentStuff
}
catch
{
        "Something happened calling Get-SomeNonExistentStuff"
}

For further insights:

http://blogs.technet.com/b/heyscriptingguy/archive/2010/03/11/hey-scripting-guy-march-11-2010.aspx

Positional arguments should be avoided

Readability and clarity should be the goal of any script we expect to maintain over time. When calling a command that takes parameters, where possible consider using Named parameters as opposed to Positional parameters.

Take the following command, calling an Azure Powershell cmdlet with 3 Positional parameters, for an example:

Set-AzureAclConfig "10.0.0.0/8" 100 "MySiteConfig" -AddRule -ACL $AclObject -Action Permit

If the reader of this command is not familiar with the set-AzureAclConfig cmdlet, they may not know what the first 3 parameters are.

The same command called using Named parameters is easier to understand:

Set-AzureAclConfig -RemoteSubnet "10.0.0.0/8" -Order 100 -Description "MySiteConfig" -AddRule -ACL $AclObject -Action Permit

Additional reading:

http://blogs.technet.com/b/heyscriptingguy/archive/2012/04/22/the-problem-with-powershell-positional-parameters.aspx

Advanced Function names should follow standard verb-noun naming convention

As introduced in PowerShell 2.0, the ability to create functions that mimic cmdlet behaviors is now available to scripters. Now that we as scripters have the ability to write functions that behave like cmdlets, we should follow the consistent nature of PowerShell and name our advanced functions using the verb-noun nomenclature.

Execute the Cmdlet below to get the full list of Powershell approved verbs.

Get-Verb

http://technet.microsoft.com/en-us/magazine/hh360993.aspx
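For illustration, here is a minimal advanced function that follows the verb-noun convention (the name and body are hypothetical, just to show the shape):

function Get-FolderSize
{
    [CmdletBinding()]
    param
    (
        [Parameter(Mandatory=$true)]
        [string]$Path
    )
    # Sum the sizes of all files under the path and report the total in MB
    $sum = (Get-ChildItem -Path $Path -Recurse -File | Measure-Object -Property Length -Sum).Sum
    if (-not $sum) { $sum = 0 }
    [PSCustomObject]@{ Path = $Path; SizeMB = [math]::Round($sum / 1MB, 2) }
}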

3. Issue Fixes
  • Fixed a locale issue “Input string was not in a correct format..” when Script Browser launches on locales that treat double/float as ‘##,####’. We are very grateful to MVP Niklas Akerlund for providing a workaround before we release the fix.
  • Fixed the issues (including the error 1001, and this bug report) when some users install the Script Browser.
  • Fixed the issues in Script Analyzer rules

We sincerely suggest you give Script Browser a try (click here to download). If you love what you see in Script Browser, please recommend it to your friends and colleagues. If you encounter any problems or have any suggestions for us, please contact us at onescript@microsoft.com. Your precious opinions and comments are more than welcome.

John Marlin
Senior Support Escalation Engineer
Microsoft Global Business Support


‘Tip of the Day’ Top Tips for February


The following links are for the top five tips from the 'Tip of the Day' blog during the month of February.

Tip of the Day: Good Bye VDS, Hello SMAPI

Tip of the Day: Failover DHCP

Tip of the Day: Screenshots on Surface

Tip of the Day: Deduplication and Backups

Tip of the Day: Optimized Files not Available in Down Level OS

NOTE: Tip of the Day is a random daily tip about Microsoft products. The idea behind it harkens back to something I started when I was first hired at Microsoft.  I told myself, "I want to try to learn something new every day. If I can learn at least one thing today, then I can call the day a success." Tip of the Day is my attempt to share the things I pick up along the way.

Robert Mitchell
Senior Support Escalation Engineer
Microsoft Customer Service & Support

2012R2 iSCSI Target Settings for Configuring a Specific Network


Let's say you have a 2012R2 iSCSI Target Server with multiple networks configured. We all know that iSCSI traffic should be on a separate network. So how do you go about configuring iSCSI to use a specific network on the target server? In previous versions of Windows, this was much easier to find since it was in the iSCSI Target software.

1. Start Server Manager

2. Select File and Storage Services in the right pane

clip_image002

3. In the Servers field right click the server name and select iSCSI Target Settings

clip_image004

clip_image005
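If you prefer scripting, the same portal configuration can be inspected and changed with the iSCSI Target cmdlets; here is a sketch, assuming 10.0.0.50 is the address on the network you want iSCSI traffic to avoid:

# List the IP addresses the iSCSI Target server currently listens on
Get-IscsiTargetServerSetting

# Stop the target from listening on a specific address
Set-IscsiTargetServerSetting -IP 10.0.0.50 -Enable $false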

Now you don't have to worry about iSCSI traffic going over the wrong network in case a client's networks are not configured properly.

Steven Graves
High Availability Sr. SEE
Microsoft Premier Support

Is Offloaded Data Transfers (ODX) working?


Offloaded Data Transfers (ODX) is a new data transfer strategy that advances how files are moved.  Only storage devices that comply with the SPC-4 and SBC-3 specifications work with this feature.  With it, copying files from one server to another is much quicker.  The feature is only available when both the source and the destination are Windows 8/Windows Server 2012 or above.

This is how it works at very high level:

  • A user copies or moves a file by using Windows Explorer, a command line interface, or as part of a virtual machine migration.
  • Windows Server 2012/2012R2 automatically translates this transfer request into an ODX (if supported by the storage array) and it receives a token that represents the data.
  • The token is copied between the source server and destination server.
  • The token is delivered to the storage array.
  • The storage array internally performs the copy or move and provides status information to the user.

Below is a picture representation of what it looks like.  The top box is the way we are used to seeing it.  If you copy a file from one machine to another, the entire file is copied over the network.  In the bottom box, you see that the token is passed between the machines and the data is transferred on the storage.  This makes copying files tremendously faster, especially if these files are in the gigabytes.

image

For more information on Offloaded Data Transfers, please refer to

Windows Offloaded Data Transfers Overview

Many Windows installations have additional filter drivers loaded on the Storage stack.  This could be antivirus, backup agents, encryption agents, etc.  So you will need to determine if the installed filter drivers support ODX.  As a quick note, if the filter driver supports ODX, but the storage does not (or vice versa), then ODX will not be used.

The filter manager exposes supported features (SprtFtrs) that tell us whether filter drivers support ODX.  We can use the FLTMC command, as shown below, to list filter drivers and their supported features.  For example:

X:\> fltmc instances
Filter      Volume Name    Altitude   Instance Name  Frame  SprtFtrs
----------  -------------  ---------  -------------  -----  --------
FileInfo    C:             45000      FileInfo       0      00000003
FileInfo    I:             45000      FileInfo       0      00000003
FileInfo    D:             45000      FileInfo       0      00000003 <-It supports both Offload Read and Write
FileInfo    K:             45000      FileInfo       0      00000003
FileInfo    \Device\Mup    45000      FileInfo       0      00000003

You can also see the Supported Features available for a filter driver in the registry:

HKLM\system\CurrentControlset\services\<FilterName>

The SupportedFeatures registry value contains this information. If it is 3, as in the FLTMC output above, the driver supports both offload read and write (ODX).
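As a quick sketch, you can query the value for a given filter driver directly (using FileInfo from the output above as the example):

reg query HKLM\SYSTEM\CurrentControlSet\Services\FileInfo /v SupportedFeatures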

Now that we have determined that ODX is supported by the required components, is it actually working?  When ODX is working, you can see the ODX commands FSCTL_OFFLOAD_WRITE and FSCTL_OFFLOAD_READ captured in a Process Monitor trace, as shown below.

clip_image001

If a target fails the offload, does not support ODX, or does not recognize the token, it can return STATUS_INVALID_TOKEN and/or STATUS_INVALID_DEVICE_REQUEST as the result.

Other reasons why it might not work:

1)    Something above the storage stack, such as an encryption or file system filter driver, can cause it to fail.
2)    Even though two disks/volumes might both support offload, they might be incompatible with each other. This has to be established by involving the storage vendors.

While not a recommendation, for informational purposes you can disable ODX functionality in the registry if so desired.  You can do this with a PowerShell command:

Set-ItemProperty hklm:\system\currentcontrolset\control\filesystem -Name "FilterSupportedFeaturesMode" -Value 1

Or you can edit the registry directly.  A value of 1 (false) means ODX is disabled, while a value of 0 (true) means it is enabled.  When this change is made, you will need to reboot the system for it to take effect.
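To check the current state before or after making the change (same path and value name as the command above):

Get-ItemProperty hklm:\system\currentcontrolset\control\filesystem -Name "FilterSupportedFeaturesMode"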

One last thing to mention is that you should always keep current on hotfixes.  This is especially true if you are running Failover Clustering.  Below are the recommended hotfixes you should be running on Clusters, which include fixes for ODX.

Recommended hotfixes and updates for Windows Server 2012-based failover clusters
Recommended hotfixes and updates for Windows Server 2012 R2-based failover clusters

Other references:
http://technet.microsoft.com/en-in/library/jj200627.aspx
http://msdn.microsoft.com/en-us/library/windows/hardware/dn265439(v=vs.85).aspx
http://msdn.microsoft.com/en-us/library/windows/desktop/hh848056(v=vs.85).aspx

Shasank Prasad
Senior Support Escalation Engineer
Microsoft Corporation

Windows Azure Pack: Infrastructure as a Service Jump Start


Free online event with live Q&A with the WAP team: http://aka.ms/WAPIaaS

Two half-days: Wednesday, July 16th and Thursday, July 17th, 9am – 1pm PST

IT Pros, you know that enterprises desire the flexibility and affordability of the cloud, and service providers want the ability to support more enterprise customers. Join us for an exploration of Windows Azure Pack's (WAP's) infrastructure services (IaaS), which bring Microsoft Azure technologies to your data center (on your hardware) and build on the power of Windows Server and System Center to deliver an enterprise-class, cost-effective solution for self-service, multitenant cloud infrastructure and application services.

Join Microsoft’s leading experts as they focus on the infrastructure services from WAP, including self-service and automation of virtual machine roles, virtual networking, clouds, plans, and more. See helpful demos, and hear examples that will help speed up your journey to the cloud. Bring your questions for the live Q&A!

Register here: http://aka.ms/WAPIaaS

Course Outline:

Day One

  • Introduction to the Windows Azure Pack
  • Install and Configure WAP
  • Integrate the Fabric
  • Deliver Self-Service

Day Two

  • Automate Services
  • Extend Services with Third Parties
  • Create Tenant Experiences
  • Real-World WAP Deployments

Instructor Team

Andrew Zeller | Microsoft Senior Technical Program Manager
Andrew Zeller is a Technical Program Manager at Microsoft, focusing on service delivery and automation with Windows Server, System Center, and the Windows Azure Pack.

Symon Perriman | Microsoft Senior Technical Evangelist
As Microsoft Senior Technical Evangelist and worldwide technical lead covering virtualization (Hyper-V), infrastructure (Windows Server), management (System Center), and cloud (Microsoft Azure), Symon Perriman is an internationally recognized industry expert, author, keynote presenter, executive briefing specialist, and technology personality. He started in the technology industry in 2002 and has been at Microsoft for seven years, working with multiple teams, including engineering, evangelism, and technical marketing. Symon holds several patents and more than two dozen industry certifications, including Microsoft Certified Trainer (MCT), MCSE Private Cloud, and VMware Certified Professional (VCP). In 2013, he co-authored Introduction to System Center 2012 R2 for IT Professionals (Microsoft Press) and he has contributed to five other technical books. Symon co-hosts the weekly Edge Show for IT Professionals, and his technologies have been featured in PC Magazine, Reuters News, and The Wall Street Journal. He graduated from Duke University with degrees in Computer Science, Economics, and Film & Digital Studies, and he also serves as the technical lead for several startups and entertainment production companies.​

Deploying Surface Pro 3 Pen and OneNote Tips


Hi, my name is Scott McArthur and I am Senior Support Escalation Engineer on the Deployment/Devices team. In today’s blog I am going to go over some tips for deploying Surface Pro 3 related to the Pen and OneNote integration.

Tip #1: Deploying custom image to Surface Pro 3

You may have noticed that if you deploy a custom image to Surface Pro 3, the Pen button does not bring up modern OneNote. The image that ships with Surface Pro 3 contains an additional update that adds this functionality. If you are deploying a custom image, you will need to incorporate that update into your deployment or reference image.

KB2968599: Quick Note-Taking Experience Feature for Windows 8.1

At this time we are working on making this update easier to download but in the meantime you can download it from the following direct link:

http://download.windowsupdate.com/d/msdownload/update/software/crup/2014/06/windows8.1-kb2968599-x64_de4ca043bf6ba84330fd96cb374e801071c4b8aa.msu

Tip #2: Setting default OneNote for the Pen

If you use the desktop version of OneNote 2013, you may want it to be the default application when the Pen button is pressed. You can change the default OneNote application in OneNote 2013 using the following steps:

1. Click File, Options
2. Choose Advanced
3. Under Default OneNote Application

image

NOTE: If you do not have this option in OneNote 2013 make sure you have the following update for OneNote installed:

KB2881082: July 8, 2014 update for OneNote 2013

Tip #3: Double click functionality for screenshots

One of the other nice features is the ability to double click the pen button to send a screenshot to OneNote.

Magic Tricks with OneNote and Surface Pro 3
http://blogs.office.com/2014/06/18/magic-tricks-with-onenote-and-surface-pro-3

In order to support this functionality, the Modern OneNote app must be the latest version available from the Store.  So, if this functionality does not work, make sure the Modern OneNote app has been updated.

If you configure the Desktop OneNote as the default OneNote application, it should work by default with the double-click feature.

Tip #4: Adding Pen pairing to a deployment

During the first boot of the OEM image that ships with the Surface Pro 3, you are prompted during OOBE to pair the pen. If you are deploying a custom image and want to add this setup screen to your deployment (to be completed by a technician or user), use the following steps:

Requirements:

  • System with the Windows ADK installed
  • The Install.wim you are deploying
  • Another Surface Pro 3 system that has the OEM system image that ships from Microsoft

1. Take one of your existing Surface Pro 3 devices that has the OEM image on it and copy the following files to USB flash drive or other location:

%windir%\system32\oobe\info\default\1033\oobe.xml
%windir%\system32\oobe\info\default\1033\PenPairing_en-US.png
%windir%\system32\oobe\info\default\1033\PenError_en-US.png
%windir%\system32\oobe\info\default\1033\PenSuccess_en-US.png

2. Open Deployment and Imaging Tools Environment cmd prompt

3. Use the DISM command to mount the image you are deploying

Dism /mount-wim /wimfile:c:\install.wim /index:1 /mountdir:c:\mount

4. Create the following folder structure in the image

C:\mount\windows\system32\oobe\info\default\1033

5. Copy all the files from Step #1 above into this folder

6. Close any Explorer windows and switch to C:\ to make sure there are no open file handles to the C:\mount folder

7. Unmount the image and save changes

Dism /unmount-wim /mountdir:c:\mount /commit
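If the unmount fails because something still has a handle open under C:\mount, you can check the mount state and, as a last resort, discard your changes and start over (a sketch; /discard throws away any edits made to the mounted image):

REM Show currently mounted images and their status
Dism /Get-MountedWimInfo

REM Abandon the mount without saving changes
Dism /unmount-wim /mountdir:c:\mount /discard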

Tip #5: Troubleshooting the pen

For additional information on troubleshooting the pen take a look at:

http://www.microsoft.com/surface/en-us/support/touch-mouse-and-search/troubleshoot-surface-pen#penshows

Hope this helps with your Surface Pro 3 deployments

Scott McArthur
Senior Support Escalation Engineer
