These are the commands currently known to this writer. They exist on Windows 8/2012 and up, though memory compression for at least one item is not present in 8/2012. Run Get-MMAgent to see what your OS has and what the status is. There are a lot of tweakables, it’s not a small set, especially on 10. One tweakable number is immediately visible, “MaxOperationAPIFiles”.
Get-MMAgent
Set-MMAgent
Disable-MMAgent
Enable-MMAgent
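On a recent Windows 10 box, Get-MMAgent output looks something like this (an illustrative sample; values will vary by OS and settings):
ApplicationLaunchPrefetching : True
ApplicationPreLaunch         : True
MaxOperationAPIFiles         : 256
MemoryCompression            : True
OperationAPI                 : True
PageCombining                : True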
Get-Help for the above did not help much. But the Microsoft web page for Disable-MMAgent shows that it will disable individual bits and pieces. Disabling memory compression is a good idea if you have lots of RAM: compression trades CPU cycles for memory, so with RAM to spare it just burns CPU. Items known to be available:
-ApplicationLaunchPrefetching
-ApplicationPreLaunch
-OperationAPI
-PageCombining
-MemoryCompression
-CimSession <CimSession[]>
-ThrottleLimit <Int32>
-AsJob
So to disable only memory compression, out of all of the above functions, just run this PowerShell command:
Disable-MMAgent -MemoryCompression
The above works, even though -MemoryCompression is not listed in the Get-Help items for Disable-MMAgent.
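To confirm the change took, run Get-MMAgent again and check that MemoryCompression now reads False; and should you want it back, Enable-MMAgent accepts the same switch:
Get-MMAgent | Select-Object MemoryCompression
Enable-MMAgent -MemoryCompression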
Another setting which may help, if you have an SSD or fast RAID, is to increase “the maximum number of prefetch files for scenarios that the Operation Recorder API records”. Default is 256.
Set-MMAgent -MaxOperationAPIFiles 1024
Categories:
Performance
If you see that Windows built-in search components (any of several, including the Indexer, Cortana, etc.) are using a lot of your disk bandwidth, run this in an administrative PowerShell:
Add-AppxPackage -Path "C:\Windows\SystemApps\Microsoft.Windows.Cortana_cw5n1h2txyewy\Appxmanifest.xml" -DisableDevelopmentMode -Register
It appears to reset or reload Cortana, or a big chunk of it, and probably disables “Development Mode” too. One web reference stated that the above has to be run in a newly created local admin profile to work.
Also, if you’re in a former (or, God forbid, current) SBS environment, make sure the SBS client is removed, and make sure GPO isn’t automatically reinstalling it.
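One way to check for SBS client leftovers (slow but thorough; the name filter here is an assumption, adjust as needed):
Get-WmiObject Win32_Product | Where-Object { $_.Name -like '*Small Business Server*' }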
Categories:
Windows OS-Level Issues
Performance
Background and synopsis
The modern web is becoming more and more demanding of web browsers, for a larger and larger proportion of web sites, every year. A lot of people have solid working hardware which cannot accept more than 8 gigabytes of RAM, and often much less. I’m fairly certain that a completely full-blown desktop/laptop of today, with full and smooth online+offline video playing, USB stick automounting, camera/phone photo autoslurping, LibreOffice or Softmaker Office, Thunderbird for email and calendaring, et cetera, cannot be had in less than 8 gigabytes of RAM. If you know differently please do email me! I use Manjaro, XFCE build, which gives me a great balance of:
- nonreinvention of wheels at install,
- amazing buglessness (congrats folks!!!),
- full desktop functionality available,
- ability to easily turn off any bits of desktop functionality which I don’t want, while
- keeping everything familiar and easy to find.
But even at its max of 8G RAM, the aged 2.2 GHz machine I’m typing on right now has had lots of trouble keeping up with my massive multitab web habit, even with sysctl settings, unless I do what is outlined below.
The gist of it is, we compile a kernel, in a very special way. The kernel is the core of any Linux machine – in fact, the kernel is really what Linux is; everything else is other projects which are combined with the Linux kernel to produce a complete machine. This is why one can still find some grammatical purists out there complaining that there is no such thing as a “Linux computer”, that computers are “GNU/Linux”, combinations of GNU software and the Linux kernel. This is a helpful way to recognize grammatical purists, but certainly no one says “EveryoneElse/Windows” or “EveryoneThatAppleWillTolerate/Apple”, and so I don’t think we need to listen to grammatical purists very much :-) But I digress. The point for this exercise is that probably every distribution of Linux ships with a kernel compiled in a generic fashion – it will run on the very minimum hardware stated by the distro documentation, and it will run acceptably on some really high maximums too.
But Linux is Linux: we can compile a kernel for our own particular needs, for our particular hardware. And believe you me, there is major advantage in doing so. Sometime, if you find yourself caring enough, dig into the differences between actual models of Intel CPUs from the last twenty years. Twenty years is good because that’s about the maximum range distro kernels have to handle. And the differences are tremendous. Newer ones have far more registers, far more internal simultaneous operations they can do, far more lots and lots of things, things I wouldn’t really understand without possibly hours of study. And I don’t have to know about them; I just find it wonderful that if I compile my kernel for my very own CPU, my entire system suddenly starts to use more of what I have, and things run much better.
Granted, one can find lots of people on the web claiming otherwise. One can find lots of different people on the web. :-)
But simply running CPU-optimized does not address my own problem. I like to read news by opening between six and a dozen browser tabs with right-clicks from front pages. This habit ate through 8 gigabytes of RAM to the crashing point quite easily…until I added a setting to my kernel compilation. I’m still doing CPU-optimization. But I’m also choosing to optimize for size, not performance. Default is performance. Size pulls the miracle :-) I do other memory-intensive things too sometimes, but massive multitabulation seems to be a terrific stress-test.
Sorry for the extended verbiage, but I’m fairly certain some readers won’t be familiar with some of the above, and it forms a good conceptual basis!
Here we go!
On modern Manjaro/Arch, there is a new AUR-inclusive package management tool, called ‘yay’. ‘yay’ is in the Manjaro community repo, and this is one small reason among a great many why I like Manjaro! So starting with a well-behaved Manjaro, we install yay:
sudo pacman -S yay
And then we set it up for PKGBUILD editing:
yay --save --editor /usr/bin/nano --editmenu --noansweredit
What is a PKGBUILD, you ask! Well, a PKGBUILD is an Arch-standard file which defines how a package is compiled and installed. Every package, AUR or repo, has one; a skeleton of one is sketched just below. We have just set up to make it very easy to edit PKGBUILD files for every AUR package we take in by ‘yay’. The full-screen text editor ‘nano’ will be used for editing, and yay will ask us every time if we want to edit. ‘yay’ is very nicely configurable, another set of congrats. It is worthwhile to mention that we edit PKGBUILD files only most carefully: packages break trivially with bad edits, and worse things can occur, albeit relatively rarely.
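For the curious, here is a minimal illustrative PKGBUILD skeleton (package name, URL, and all contents hypothetical, just to show the shape of the thing):
# illustrative only -- not a real package
pkgname=hello-example
pkgver=1.0
pkgrel=1
pkgdesc="A hypothetical example package"
arch=('x86_64')
url="https://example.org"
license=('GPL')
source=("https://example.org/hello-$pkgver.tar.gz")
sha256sums=('SKIP')

build() {
  cd "hello-$pkgver"
  make
}

package() {
  cd "hello-$pkgver"
  make DESTDIR="$pkgdir" install
}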
But the next step is to do a little preparing of the environment. Edit /etc/makepkg.conf, and find these lines:
CFLAGS="
CXXFLAGS="
Both lines will be quite long, with close-quotes, containing several items. One of the items in both is -march=; this needs to be changed from whatever it is to -march=native. We also need an item added, or changed if it already exists: -mtune=native. This will make everything we compile run by far the best on the very make and model of CPU in this machine. It will also make the packages not run well on anything else, and not run at all on some machines! Fair warning :-)
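When done, the two lines might look something like this (the surrounding options vary by Arch/Manjaro release; only -march and -mtune are our changes):
CFLAGS="-march=native -mtune=native -O2 -pipe -fno-plt"
CXXFLAGS="${CFLAGS}"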
Also in this file, find a line starting with this:
#MAKEFLAGS="-j
There will be a number to the right and a close quote. Find out how many CPU cores your machine has, and add one; so if you have a dual core, you’ll add this line just below the original:
MAKEFLAGS="-j3"
This speeds up package compilation a lot, even with just two cores, and enormously with 4 or more.
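If you don’t know your core count offhand, nproc reports it; and since /etc/makepkg.conf is read by bash, you can even compute the value inline (a convenience, not a requirement):
MAKEFLAGS="-j$(($(nproc)+1))"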
There is one more item to prepare. In this file (“~” means your home directory):
~/.gnupg/gpg.conf
you’ll want to add the following:
keyserver-options auto-key-retrieve
auto-key-locate hkp://pool.sks-keyservers.net
This eliminates the need to manually receive and approve GPG signing keys for various source files as they are downloaded.
So once we have the above done, we need to choose our kernel package. Most recently I have been using “linux-uksm”, one of many Linux kernels available through AUR with sets of performance patches. All of them have different strengths, and many of them do not make it easy to customize. Thus far linux-uksm has done extremely well. So we begin:
yay -S linux-uksm
We give the default responses by pressing Enter/Return, until it asks if we want to edit the PKGBUILD:
:: Parsing SRCINFO (1/1): linux-uksm
1 linux-uksm (Installed) (Build Files Exist)
==> PKGBUILDs to edit?
==> [N]one [A]ll [Ab]ort [I]nstalled [No]tInstalled or (1 2 3, 1-3, ^4)
==>
For this, type just the number 1 (that’s a package count number, in case you were running ‘yay’ on multiple packages at once), and press Enter/Return. The PKGBUILD will come up in the text editor ‘nano’. As of this writing, it looks like this:
# Maintainer: Piotr Gorski <lucjan.lucjanov@gmail.com>
# Contributor: Jan Alexander Steffens (heftig) <jan.steffens@gmail.com>
# Contributor: Tobias Powalowski <tpowa@archlinux.org>
# Contributor: Thomas Baechler <thomas@archlinux.org>
### BUILD OPTIONS
# Set these variables to ANYTHING that is not null to enable them
### Tweak kernel options prior to a build via nconfig
_makenconfig=
### Tweak kernel options prior to a build via menuconfig
_makemenuconfig=
### Tweak kernel options prior to a build via xconfig
_makexconfig=
### Tweak kernel options prior to a build via gconfig
_makegconfig=
[ PKGBUILD -- 386 lines ]
^G Get Help ^O Write Out ^W Where Is ^K Cut Text ^J Justify ^C Cur Pos
^X Close ^R Read File ^\ Replace ^U Paste Text^T To Spell ^_ Go To Line
Using the cursor buttons on your keyboard, move the text cursor to the position just to the right of the line _makexconfig=, and press the Y letter key once. You have now instructed the PKGBUILD engine to build and run a kernel compilation configurator application, called ‘xconfig’, as the next step. Press Ctrl-X to initiate closure of nano, press Y to confirm saving, press Return/Enter to save and close the file. Because a kernel PKGBUILD is a large and multifaceted thing, yay may bring up one or more additional files; keep on pressing Ctrl-X until you’re back at the ‘yay’ prompt (at this writing, just one additional Ctrl-X):
==> PKGBUILDs to edit?
==> [N]one [A]ll [Ab]ort [I]nstalled [No]tInstalled or (1 2 3, 1-3, ^4)
==> 1
==> Proceed with install? [Y/n]
Press the Y key, and Return/Enter. The source code for the kernel will download, and then xconfig will be compiled and run. It’s a GUI, and the big advantage for us right now is that its search works very well. There are a truly enormous number of changes that can be made, and almost all of them can result in an unusable kernel (or sometimes even a kernel that can do harm over time); we want exactly two.
So click the Edit menu, choose Find, and type in the word “native”, no quotes. A nice GUI chooser circle will appear to the left of “Native optimizations autodetected by GCC”. Click that GUI chooser to fill the circle. This is the step which will engage all, or nearly all (depending on the version of the GNU compiler and how new your CPU is!), of the features of the very CPU you are using, in the kernel which will shortly be compiled.
Then, click in the Find box field, and type the word “size”, no quotes. Another nice GUI chooser circle will appear, to the left of “Optimize for size”. Click that GUI chooser to fill the circle. This is the step which will optimize for size (memory-usage efficiency), not performance. Default is performance.
Once the above is done, close the Find window using basic GUI, usually the extreme upper-right corner of the window. Click the File menu, choose Save. Then File menu, Quit. Kernel compilation will begin.
Depending on how much RAM and how many CPU cores you have, compilation can take a very long time; with just one core and (say) 1 gigabyte of RAM, probably several hours. The kernel install will complete on its own after compilation. However, at least as of this writing, you’ll have to do the following to get the kernel headers (essential) and docs (admittedly optional) in:
cd ~/.cache/yay/linux-uksm
sudo pacman -U linux-uksm*pkg.tar.xz
Also be aware, if you have compiled more than one linux-uksm kernel, the above will attempt to reinstall them all; you may want to clean house first :-)
After you reboot, your new kernel will probably be active; if you’ve never done this before, you’ll probably notice a performance jump immediately. To verify with certainty:
uname -a
If you would like more control of the kernel you run at any time, grub-customizer is highly recommended.
Categories:
Performance
Linux OS-level Issues
There is a tool to do the job: Microsoft has a way to remove everything preinstalled on your new machine and slowing it down:
https://www.pcmag.com/news/348679/how-clean-up-windows-10-with-the-refresh-windows-tool
It does remove everything but Windows, so it is only suitable for completely new machines.
Categories:
Performance
Cleanup
The command is CONTIG (also available in 64-bit as CONTIG64), and it is a Sysinternals tool:
https://docs.microsoft.com/en-us/sysinternals/downloads/contig
You’ll want to put the appropriate binary in C:\Windows. Run it like this, in administrative CMD; it will defragment all of the hidden NTFS metadata files it can for the C: drive (this is the 64-bit version):
contig64 -nobanner -accepteula C:$Mft
contig64 -nobanner -accepteula C:$LogFile
contig64 -nobanner -accepteula C:$Volume
contig64 -nobanner -accepteula C:$AttrDef
contig64 -nobanner -accepteula C:$Bitmap
contig64 -nobanner -accepteula C:$Boot
contig64 -nobanner -accepteula C:$BadClus
contig64 -nobanner -accepteula C:$Secure
contig64 -nobanner -accepteula C:$UpCase
contig64 -nobanner -accepteula C:$Extend
Notice the distinct lack of slashes in the above!
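If you’d rather not paste ten lines, a loop does the same job; a sketch for interactive CMD (double the percent sign, %%m, if you put it in a batch file):
for %m in ($Mft $LogFile $Volume $AttrDef $Bitmap $Boot $BadClus $Secure $UpCase $Extend) do contig64 -nobanner -accepteula C:%m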
Categories:
Disks, Drives, and Filesystems
Performance
The command is CONTIG (also available in 64-bit as CONTIG64), a Sysinternals tool:
https://docs.microsoft.com/en-us/sysinternals/downloads/contig
It defrags, and does it very well. It does it file by file. Here’s a command probably suitable for background operation on a whole C drive, on a 64-bit machine, quiet mode:
start /LOW contig64 -s -q C:\*
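Contig will also just report fragmentation without changing anything, which is handy before and after; the -a switch analyzes:
contig64 -a -s C:\*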
Categories:
Disks, Drives, and Filesystems
Performance
Really good article here:
http://www.itquibbles.com/sql-sbsmonitoring-high-disk-usage/
Solves the problem of the database reaching max capacity, and also speeds things up in general.
Short version:
In SBS 2008, run the contents of this zip file in an administrative PowerShell window.
In SBS 2011, start this shell as administrator:
C:\Program Files\Windows Small Business Server\Bin\MoveDataPowerShellHost.exe
and then while in the shell, run the contents of this zip file.
If it says “1 row affected”, it’s done, and the messages will point out old MDF and LDF files to remove.
You may notice that the script linked here is just a tad different from the one on the itquibbles page; this one just adds the -force items mentioned as an option on that page.
Categories:
Windows OS-Level Issues
Performance
These steps can improve Windows performance a whole lot. They work because a vast array of different applications and services in Windows utilize VSS on their backends. All of the below, except for one server-only step sometimes needed, is encapsulated in the PowerShell script (3.0 and up) OVSS.ps1, part of the windows-tools project.
To do the VSS optimization interactively, start an administrative CMD, and then…
Step 1:
vssadmin Delete Shadows /All
If there are orphan shadows, you will be asked whether you want to delete them. If there are and you delete them, you will see immediate performance benefit. Reportedly, Windows autodeletes them only after there are 64 per volume. We prefer to see zero! These build up as a result of bad shutdowns, drive and drive controller issues, and insufficient RAID resources to serve demands.
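If you’d like to see what is present before deleting, the companion command lists all existing shadow copies:
vssadmin List Shadows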
Step 2:
We now improve any existing preassociation of disk space for VSS. On some machines, this will increase performance very impressively, immediately. In general it keeps them smooth and stable and prevents hesitations. This does not reserve or take up the space, it just “associates” it, makes it ready for use, so that whenever Windows wants to do any of the bajillions of things it does with VSS, things ranging from tiny to enormous, it can skip that step.
It is worthwhile to know that C:, on all workstation installs and many server installs, already has a minimal preassociation set up; we should check whether the full job has been done. So the first step is to check. Do the below:
vssadmin list shadowstorage
If it gives you something like this:
vssadmin 1.1 - Volume Shadow Copy Service administrative command-line tool
(C) Copyright 2001-2013 Microsoft Corp.
Shadow Copy Storage association
For volume: (\\?\Volume{84214e3c-0000-0000-0000-100000000000}\)\\?\Volume{84214e3c-0000-0000-0000-100000000000}\
Shadow Copy Storage volume: (\\?\Volume{84214e3c-0000-0000-0000-100000000000}\)\\?\Volume{84214e3c-0000-0000-0000-100000000000}\
Used Shadow Copy Storage space: 0 bytes (0%)
Allocated Shadow Copy Storage space: 0 bytes (0%)
Maximum Shadow Copy Storage space: 100 MB (20%)
Shadow Copy Storage association
For volume: (C:)\\?\Volume{84214e3c-0000-0000-0000-501f00000000}\
Shadow Copy Storage volume: (C:)\\?\Volume{84214e3c-0000-0000-0000-501f00000000}\
Used Shadow Copy Storage space: 0 bytes (0%)
Allocated Shadow Copy Storage space: 0 bytes (0%)
Maximum Shadow Copy Storage space: 373 GB (20%)
where “Maximum Shadow Copy Storage space:” for each volume is set to 20%, the rest has been done and you are fully optimized. Otherwise, if this is a desktop OS, we resize the existing association for each volume. For volumes without letters, and to pull a list of all VSS-ready volumes, see the note towards the end of this document.
So for the C drive, do the below in administrative CMD:
vssadmin Resize ShadowStorage /For=C: /On=C: /MaxSize=20%
Do repeat for any other active hard drive partitions, D:, E:, et cetera. Don’t worry if you get an error; the next step covers it.
Step 3:
It may well throw an error saying there is no such association. If this is a workstation OS, vssadmin lacks two commands which we need for any further steps, so in that case we are done. But on any Windows Server OS from 2008R2 on, if the error was thrown, we do an Add:
vssadmin Add ShadowStorage /For=E: /On=E: /MaxSize=20%
Step 4:
And finally (server only), one more thing which can help if, for instance, C: is almost full but E: has plenty of space:
vssadmin Delete ShadowStorage /For=C: /On=C:
vssadmin Add ShadowStorage /For=C: /On=E: /MaxSize=20%
This maximizes overall performance, and also prevents possible backup failures and other issues due to insufficient disk space on C:.
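If there are several lettered volumes, the resize step can be looped; a minimal PowerShell sketch, assuming each lettered volume should self-host its association at 20% (adjust /On= per step 4 if you relocate any; errors on volumes without an existing association are expected, per step 3):
# resize the VSS storage association on every lettered volume
Get-WmiObject Win32_Volume | Where-Object { $_.DriveLetter } | ForEach-Object {
    vssadmin Resize ShadowStorage /For=$($_.DriveLetter) /On=$($_.DriveLetter) /MaxSize=20%
}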
Note:
On some machines, the volumes do not have letters. For these you will need to use the volume GUID path. In vssadmin list shadowstorage, they look like this:
Shadow Copy Storage association
For volume: (\\?\Volume{99ac05c7-c06b-11e0-b883-806e6f6e6963}\)\\?\Volume{99ac05c7-c06b-11e0-b883-806e6f6e6963}\
Shadow Copy Storage volume: (\\?\Volume{99ac05c7-c06b-11e0-b883-806e6f6e6963}\)\\?\Volume{99ac05c7-c06b-11e0-b883-806e6f6e6963}\
Used Shadow Copy Storage space: 0 B (0%)
Allocated Shadow Copy Storage space: 0 B (0%)
Maximum Shadow Copy Storage space: 32 MB (32%)
For such a situation, substitute \\?\Volume{99ac05c7-c06b-11e0-b883-806e6f6e6963} (the whole long string) for C: in the above command lines.
PowerShell will give volume GUID paths for all volumes thusly:
GWMI -namespace root\cimv2 -class win32_volume
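The DeviceID column is the GUID path; to see just the useful columns (a convenience, same data):
GWMI -namespace root\cimv2 -class win32_volume | Select-Object DriveLetter, Label, DeviceID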
References are here:
https://technet.microsoft.com/en-us/library/cc788050.aspx
https://www.storagecraft.com/support/kb/article/289
http://backupchain.com/i/how-to-delete-all-vss-shadows-and-orphaned-shadows
http://www.tech-no.org/?p=898
Categories:
VSS
Performance
You can get much better Internet performance with the right router/firewall. I’ve had at least three different Netgears at home over the years, all mid- or mid-high range in their consumer lines at purchase. Every time, I tested using up-to-date OEM firmware, and tested with DD-WRT, many tweaks on both. DD-WRT gave a little improvement. On a little divine inspiration, I just did this:
- Took a ten-year-old quad-core Vista box with three gigs of RAM
- Put in a $40 quad Intel server NIC I bought from Amazon.com
- Installed pfSense and set it up in very default fashion, exceptions being use of 192.168.2.0/24 as LAN subnet, 192.168.2.1 as LAN IP. Not using the motherboard NIC, just two on the Intel card so far.
- Set my current DD-WRTed Netgear to do DHCP forwarding instead of serving, set it static to 192.168.2.2, left it otherwise alone
- Connected one LAN port of the Netgear to the LAN port I set up in pfSense
- Disconnected the WAN port of the Netgear, plugged Internet directly into the WAN port in pfSense
Suddenly WWW and Roku respond much faster, with much less latency and jitter and other delay, and most unexpectedly, Internet download speed is much, much faster, even though the wifi is still running through the Netgear. And after a bit of performance tweaking, pings are lower, from 28ms down to 22ms wired and 24ms wireless.
Haven’t tried Squid proxying yet, or IPv6, but will be!
Categories:
Performance
Router/Firewall Configuration