Yes, I find command line tools that have a "--dry-run" flag to be very helpful. If the tool (or script or whatever) is performing some destructive or expensive change, then having the ability to ask "what do you think I want to do?" is great.
It's like the difference between "do what I say" and "do what I mean"...
That's what I like about PowerShell. Every script can include a "SupportsShouldProcess" [1] attribute. This means you can pass two new arguments to your script, which have standardized names across the whole platform:
- -WhatIf to see what would happen if you run the script;
- -Confirm, which asks for confirmation before any potentially destructive action.
Moreover, these arguments get passed down to any command in your script that supports them. So you can write something like:
[CmdletBinding(SupportsShouldProcess)]
param ([Parameter()] [string] $FolderToBeDeleted)
# I'm using bash-like aliases but these are really powershell cmdlets!
echo "Deleting files in $FolderToBeDeleted"
$files = @(ls $FolderToBeDeleted -rec -file)
echo "Found $($files.Length) files"
rm $files
If I call this script with -WhatIf, it will only display the list of files to be deleted without doing anything. If I call it with -Confirm, it will ask for confirmation before each file, with an option to abort, debug the script, or process the rest without confirming again.
I can also declare that my script is "High" impact with the "ConfirmImpact = High" attribute argument. This makes it so that the user gets asked for confirmation without explicitly passing -Confirm. A user can set their $ConfirmPreference to High, Medium, Low, or None, to make sure they get asked for confirmation for any script that declares an impact at least as high as their preference.
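For concreteness, here's what invoking the script above might look like, assuming it's saved as Clear-Folder.ps1 (the filename and folder are made up):

```powershell
# Preview only: prints what would be deleted, deletes nothing.
.\Clear-Folder.ps1 -FolderToBeDeleted C:\temp\old -WhatIf

# Prompt before each deletion, with options to abort, debug,
# or proceed to the rest without further prompts.
.\Clear-Folder.ps1 -FolderToBeDeleted C:\temp\old -Confirm
```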
I'm a bit confused (because I didn't read the docs)... does calling it with "-WhatIf" exercise the same code path as calling without, only the "do destructive stuff" automagically doesn't do anything? Or is it a separate routine that you have to write?
Because if it is an entirely separate code path, doesn't that introduce a case where what you say you'll do isn't exactly what actually happens?
> Or is it a separate routine that you have to write?
If you are writing a function or a module that does something itself (e.g., an API wrapper), then of course you need to write it yourself.
But if you are writing just a script for your mundane one-time/everyday tasks and call cmdlets that support ShouldProcess, then it works automagically. Passing '-WhatIf' to the script passes `-WhatIf` on to any cmdlet that has 'ShouldProcess' in its definition. Of course, if someone made a cmdlet with a declared ShouldProcess but didn't write the logic to process it, you are out of luck.
But if you have a spare couple of minutes, check the docs in the link; it was originally a blog post by kevmarq, not a boring autodoc.
It's the first option. And yes, sometimes you have to be careful if you want to implement SupportsShouldProcess correctly, it's not something you can add willy-nilly. For example, if you create a folder, you can't `cd` there in -WhatIf mode.
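To illustrate that first option: a function author opts in with the attribute and guards each destructive call with $PSCmdlet.ShouldProcess. A minimal sketch (the function name is made up):

```powershell
function Remove-StaleFile {
    [CmdletBinding(SupportsShouldProcess, ConfirmImpact = 'Medium')]
    param([string] $Path)

    foreach ($file in Get-ChildItem -Path $Path -File) {
        # Under -WhatIf, ShouldProcess prints a "What if:" message and
        # returns $false, so the same code path runs either way - only
        # the destructive call inside the if is skipped.
        if ($PSCmdlet.ShouldProcess($file.FullName, 'Delete file')) {
            Remove-Item -LiteralPath $file.FullName
        }
    }
}
```

Anything outside a ShouldProcess guard (like a cd into a freshly created folder) still runs in -WhatIf mode, which is exactly where the care is needed.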
The rule we have is that anything that is not idempotent and not run as a matter of daily routine must dry-run by default, and not take action unless you pass --really. This has saved my bacon many times!
Idempotency means that f(f(X)) = f(X): applying the operation a second time changes nothing. Modifying X in between the applications is not allowed.
Is there really an initial environment where rm * ; rm * ; does something different than rm * once?
In the case of any live system, I would say yes. Additional, and different, files could have appeared on the filesystem in between the two runs of rm *.
* is just shorthand for a list of files. Calling rm with the same list of files will have the same result each time you call it. That's idempotent.
Your example is changing the list of files, or arguments to rm between runs. Same as pc85’s example where the timestamp argument changes.
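In code form, the distinction is whether the argument list is captured once or recomputed each run (PowerShell sketch, since that's the thread's running example; $Folder is a placeholder):

```powershell
# Capture the list once; repeating the deletion with that same list is
# idempotent with respect to filesystem state. The second run merely
# errors on already-missing files, which we suppress.
$list = Get-ChildItem -Path $Folder -File
$list | Remove-Item -ErrorAction SilentlyContinue
$list | Remove-Item -ErrorAction SilentlyContinue   # no further effect
```

Re-evaluating the glob (or Get-ChildItem) between the two deletions is what breaks idempotency: the list itself changes.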
In addition to what einsty said (which is 100% accurate), if you're deleting aged records, on any system of sufficient size objects will become aged beyond your threshold between executions.
Right. You can kind of consider the state of a filesystem on which you occasionally run rm * purges to be a system whose state is made up of ‘stuff in the filesystem’ and ‘timestamp the last purge was run’.
If you run rm * multiple times, the state of the system changes each time because that ‘timestamp’ ends up being different each time.
But if instead you run an rm on files older than a fixed timestamp, multiple times, the resulting filesystem is idempotent with respect to that operation, because the timestamp ends up set to the same value, and the filesystem in every case contains all the files added later than that timestamp.
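A sketch of that fixed-timestamp purge (PowerShell; the cutoff date and folder are placeholders):

```powershell
# Purge keyed to a fixed cutoff: running this twice leaves the
# filesystem in the same state as running it once, regardless of
# files added after the cutoff.
$cutoff = Get-Date '2024-01-01'
Get-ChildItem -Path $Folder -File |
    Where-Object { $_.LastWriteTime -lt $cutoff } |
    Remove-Item
```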
If there is an rm executable in the current directory, and also one later in your PATH, the second run might use a different rm that could do whatever it wants to.
This is actually a likely scenario, as it is common to alias rm to rm -i. An alias from your bash config keeps working in the current session even after .bashrc is nuked, but some people wrap rm with a script instead of an alias (e.g., to send items to Trash), and a script is resolved via PATH.
Going further, make it dry run by default and have an --execute flag to actually run the commands: this encourages the user to check the dryrun output first.
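In the PowerShell idiom used upthread, that inversion is just a switch that defaults to off (sketch; the -Execute parameter name is made up):

```powershell
param([string] $FolderToBeDeleted, [switch] $Execute)

$files = @(Get-ChildItem -Path $FolderToBeDeleted -Recurse -File)
if ($Execute) {
    $files | Remove-Item
} else {
    Write-Output "Dry run: would delete $($files.Count) files."
    Write-Output "Re-run with -Execute to actually delete them."
}
```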
All my tools that have a possibly destructive outcome use either an interactive stdin prompt or a --live option. I like the idea of dry-running by default.