Archive October 2018

New Release: PowerCLI 11.0.0

PowerCLI has been moving at quite the rapid pace over the last two years. In 2018, we’ve been releasing roughly every other month to make sure we get the latest features, performance improvements, and updates available as quickly as possible. Well, it’s been two months and we’re not going to break this trend. Today, we […] The post New Release: PowerCLI 11.0.0 appeared first on VMware PowerCLI Blog.

VMware Social Media Advocacy

List Computers in Specific OU which are Enabled and Output to CSV

# Outputs computer accounts, including whether each is Enabled (True or False)
# Targets a specific OU
# Lists computer names

# OU variable to set
$OU_HotWiredUK_location = "OU=Computers,OU=HotWiredHQ,OU=UK,DC=test,DC=com"

# Output CSV to C:\Scripts\...
# Enabled, Name and DistinguishedName are all default properties of
# Get-ADComputer, so -Properties * isn't needed
Get-ADComputer -Filter * -SearchBase $OU_HotWiredUK_location |
    Select-Object Enabled, Name, DistinguishedName |
    Export-Csv C:\Scripts\OU_HotWiredUK_location.csv -NoTypeInformation
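If you only want the machines that are still enabled, the same query can be narrowed with a server-side filter. A sketch based on the script above (the OU path and the output file name are placeholders from my lab):

```powershell
# Only return computer accounts that are enabled, filtered on the domain controller
$OU_HotWiredUK_location = "OU=Computers,OU=HotWiredHQ,OU=UK,DC=test,DC=com"

Get-ADComputer -Filter 'Enabled -eq $true' -SearchBase $OU_HotWiredUK_location |
    Select-Object Enabled, Name, DistinguishedName |
    Export-Csv C:\Scripts\OU_HotWiredUK_enabled.csv -NoTypeInformation
```

Putting the condition in -Filter pushes the work to the domain controller instead of pulling every account back and filtering locally.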

Qualys – Vulnerability Management Notes

Vulnerability Management


A tool to manage and mitigate vulnerabilities.

My training session covered how to:
1. Scan the Network
2. Manage Host Assets
3. Report on Scans
4. Manage User Accounts
5. Remediate Risk

Things to know:

  • The IP ranges of your networks
  • The IP addresses assigned to your Qualys scanners

Vulnerabilities and Scans

  • You can import vulnerability libraries
  • You can run authenticated scans / trusted scans

Ratings and Severities
After a scan has been run:

  • Vulnerability Ratings are Red, Yellow and Blue
  • Severity levels are graded 1-5


Asset Management

  • Group assets – note that nested groups aren't supported
  • Set a business impact attribute to calculate business risk
  • Apply tags and child tags to your assets, which lets you build an operating system hierarchy


Reporting

  • Create template-based reports
  • Create tickets based on the report outputs

User Management

  • Roles – Scanner, Manager, Unit Manager, Auditor, Reader, Remediation User, Contact
  • Each role can be granted access to the GUI and/or the API


  • Assign tasks to users


Files Older Than 3 Months Combined Total File Size

A requirement came in to identify the total size of all files not modified in the last 3 months.

This was the solution:

# Run as administrator
# You need permission on the files and folders

$date = (Get-Date).AddMonths(-3)

dir C:\temp -Recurse | ?{ $_.LastWriteTime -lt $date -and !$_.PSIsContainer } | Measure-Object -Property Length -Sum

# One-liner version of the same thing; ending a line with a pipe
# continues the pipeline onto the next line
dir C:\temp -Recurse -Force -ErrorAction SilentlyContinue |
    ?{ $_.LastWriteTime -lt (Get-Date).AddMonths(-3) } | Measure-Object -Property Length -Sum


# The resulting Sum is in bytes. To convert it to gigabytes, you can do this:

$files = dir C:\temp -Recurse -Force -ErrorAction SilentlyContinue |
    ?{ $_.LastWriteTime -lt (Get-Date).AddMonths(-3) } | Measure-Object -Property Length -Sum

($files.Sum / 1GB).ToString("F02")

# F02 sets how many digits appear after the decimal point – in my case, 2 digits.
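As a quick sanity check of the conversion, here is a hypothetical Sum value of roughly 1.2 billion bytes run through the same expression:

```powershell
# Hypothetical byte count; 1GB in PowerShell is 1073741824 bytes
$sum = 1234567890
($sum / 1GB).ToString("F02")   # -> 1.15
```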


# Per-folder totals, written out to a file. Note that Write-Host bypasses
# redirection, so the folder name is emitted as a plain string instead.
dir C:\temp -Recurse | ?{ $_.PSIsContainer } | %{

    "Current folder is $($_.FullName)"

    dir $_.FullName | Measure-Object -Property Length -Sum -ErrorAction SilentlyContinue

} > 'C:\temp\file sizes'
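Once you know the total, the next question is usually which files to archive first. A sketch along the same lines (the path and the 3-month cutoff are the same placeholders used above) listing the ten largest stale files:

```powershell
# List the 10 largest files not modified in the last 3 months,
# with their size shown in MB to two decimal places
dir C:\temp -Recurse -Force -ErrorAction SilentlyContinue |
    ?{ $_.LastWriteTime -lt (Get-Date).AddMonths(-3) -and !$_.PSIsContainer } |
    Sort-Object Length -Descending |
    Select-Object -First 10 FullName, LastWriteTime,
        @{Name='SizeMB'; Expression={ ($_.Length / 1MB).ToString('F02') }}
```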

Credit to the original post this script is based on.

Vembu – Live Webinar – Zero Data Loss

Interested in learning more about implementing near zero data loss for your IT setup? Join a live webinar session hosted by Vembu’s experts on the following dates:

  • October 3rd, Wednesday at 2PM ET, 11AM PT – for Americas
  • October 4th, Thursday at 11AM GST – for EMEA, APAC & ANZ



What is being discussed:

Let’s consider, hypothetically, that a disaster has hit your data center. Most IT admins will focus on working out how much data has been lost, when what they should be doing is getting back to work on the servers recovered from the previous backup jobs. With most vendors marketing near Continuous Data Protection (CDP) that promises near zero data loss, the critical question is how near it actually is. In their upcoming webinar, “Towards near-zero data loss. What you need to get right”, Vembu’s experts will give system administrators insight into tackling this persistent pain point.

So how disaster-prepared are you? Here’s why joining this webinar session should be one of the top things you prioritize as October nears.

Why the near CDP rule? It isn’t just about replicating your servers to another site during a disaster. It’s about the IT problems near CDP can resolve and the huge loss in revenue it can save you from.


It isn’t just about snapshots and replication either; those have been in storage systems for years. Getting your RPOs and RTOs right is what really matters. If yours currently run to a few hours, that’s a serious problem: they can, and should, be brought down to a matter of minutes.

Getting near CDP right with the Vembu BDR Suite: avoiding or reducing data loss, performing recovery verification checks, and knowing the different recovery scenarios that can help you recover your physical and virtual setups during any downtime. These are the practices the webinar will cover.