Engine Improvement with a Negative Ion Generator
article #1267, updated 2 days ago

I’ve never been much of an engine guy, and with computers controlling them I have become even less of one over time. But theoretically, one should be able to improve anything at least a bit, and I may have stumbled on a way.

A negative ion generator is a device commonly used to improve air quality. It “ionizes” molecules and atoms in the air, giving them negative electrical charges. This observably coalesces dust particles, which then fall, and it also destroys odors. One can find health-related reports about them too. Reportedly, much of the “invigoration” one encounters in the air right after a thunderstorm is negative ionization.

There used to be “negative ionizer” widgets, little bricks that plugged in and hung onto wall power sockets; these did help, but the dust tended to coalesce and adhere within a few inches of the device, on the wall and so forth, which is why we don’t see those much anymore! But they are being built into air conditioners quite a lot now; even the window air conditioner we bought a year or two ago has one inside. Little ones with fans are now readily available from a few different companies.

And I do enjoy testing the walls of my current box, so, thought I, I wonder what would happen if we charged the air going into our friendly household truck engine. I have a 1998 Tahoe, 5.7L EFI V8, which underwent some mods before she asked to come into our life (her name is Bertha, she is a big girl with a very low voice)…and she has a certain amount of airspace available in her engine compartment, so I thought, why not. I remember just enough physical chemistry (which I mostly failed) to be dangerous, and the idea of adding electrons to air molecules and atoms to make them more reactive sounds like a way to get a very nicely excited sort of energy into her heart. After all, it’s not how much energy you have, it’s the preparation of that energy into usable form. We have enormous amounts of unused chemical energy in every engine cycle; if we can bleed off a little engine power electrically to get a noticeably helpful net result, that’s a definite gain.

So I ordered one of these,

after a lot of looking around, from Alanchi on AliExpress. The pic is for the 12VDC version; it comes in 110VAC and 220VAC too. I ordered the 12VDC of course, to wire straight into the existing electricals. It is advertised as a 30 million particle per cm3 negative ionizer, which is much more powerful than any of the others I could find, except one from the same source which is 220VAC only. That one is rated at 100 million particles per cm3…but I’m not going to try to engineer 220VAC under Bertha’s hood ☺ Also, unlike what I have seen in the past, this class of ionizer throws its output off little carbon brushes, rather than rows of thin and sharp metal needles. I have seen the metal needles degrade over time, due to corrosion and possibly more interesting behaviors (I saw what looked like a slow-moving, brightly glowing spark rising off a needle on at least two or three occasions); the carbon brushes strike me as a very good idea.

If you are in the U.S., you’ll spend a lot of money on shipping from AliExpress unless you are willing to wait a long time; I waited a long time ☺ and I don’t regret it. It gave me time to think about setting this up in as durable a fashion as possible, which we really do want in an engine compartment. We don’t want to cause ourselves electrical problems of any sort; bad ones are very bad, so we have to be careful. One nice thing: this 30 M/cm3 ionizer element uses only 1 watt of power, just a tiny sip.

Do note that what we want is explicitly not an “ozone generator”. Ozone is a peculiar and less stable molecular form of oxygen, and it is both poisonous and corrosive. We do not want any noticeable amount of it in regular contact with anything we care to keep. It is sometimes used as a cleansing agent, to kill invasive bugs and other unpleasantries, but it is not what we are after here. Most if not all electronics produce very tiny amounts of ozone, and thunderstorms produce more; the devices we want for this purpose explicitly produce only infinitesimal amounts, and they are explicitly rated for this as well, because years ago this was not done so carefully, and there was confusion.

It is also true that I am at least a tad concerned with possible corrosion in this build. Ionization means reactivity: various components of air are being made more likely to do chemical reactions with things they encounter than they would otherwise. So be warned: if you try this, you are taking a risk just as I am; I have no idea what this is doing to various sensors and other bits! As of this writing, 2019-02-18, the project has been running about three months, with no evil signs yet and some definite good. I’ll be updating as I learn things and encounter things happening; see the “results” section at the bottom if you wish ☺ One idea which came up recently was checking the spark plugs; this can be one way to find badness in the cylinders. I’ll have them changed out soon so I can get multiple experienced opinions.

And back to work. I set up the electricals as well as I know how, with crimp-on terminals for every wire, because I intend to run with this in the long term, and Kansas sometimes (and never predictably) gets very cold winters, very hot summers, wet springs, et cetera. Power comes from the fuse box, using something called a “fuse tap”,

which I learned about through web-searching; I found the AutoZone three blocks from our home had the ones for Bertha in stock. You take out an existing fuse, plug the fuse tap in, and then plug the old fuse and a second new fuse into the fuse tap’s own sockets. The fuse tap has a wire end to crimp onto, and that runs to the widget needing the power.

We could wire straight to the battery, but that would mean opening the hood to switch it on and off every time. Since we want this widgetry to always have power with the engine, we use the fuse tap, and choose the fuse socket accordingly. The one marked “IGN” (“ignition”, I think) is working well for me, though I saw one bit of web-advice against it for unclear reasons, probably related to applications pulling a lot more power than this one. On Bertha, IGN is also the only socket of the correct size in the fuse box under the hood: 10A, which is the maximum for all of the fuse taps I found for this vehicle. My friendly neighborhood AutoZone guy was also fast and accurate in finding crimp-on terminals that fit the switch I wired in (further down), and other items.
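Since we are talking fuse sizing: a quick back-of-envelope check on the current draw, assuming the advertised 1 watt per ionizer element is accurate:

```latex
I = \frac{P}{V} = \frac{1\,\mathrm{W}}{12\,\mathrm{V}} \approx 0.083\,\mathrm{A}
```

So even several elements together draw well under half an amp, nowhere near the 10A limit; the fuse is really there to protect the wiring, not to limit the widget.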

So I ran a new wire from the fuse tap, all the way around the back of the engine compartment, threading through items which don’t get hot to hold it in place, to a little switch with a light in it, so I could know for certain when the widget is powered, and so I could shut it off if something happened in certain categories ☺ I followed the simple wording on the switch (+12VDC here, accessory wire there, ground there), and grounded both the device and the switch directly to the battery. (I have since encountered advice saying that things should be grounded to the chassis or engine ground, which will be done fairly soon.) Then I drilled four small holes, one for each of the carbon brushes, in the casing for the air filter. This is emphatically pre-filter, not post-filter, because I don’t care how strong those brushes are, I don’t want bits of them ever going into Bertha’s engine! Here’s the result:

Normally everything sits in that little cavity just under the switch; I pulled it all out for the pic. You’ll notice the four wires going into the air filter casing. I used a very nice epoxy from JB Weld, advertised to bond any plastic; it works very well, highly recommended. Unlike other products, it does not make you wonder how much destruction you are doing to your lungs!

It is true that I will end up regretting using epoxy if/when I eventually have to replace the ionizer unit, but that’s fine, that’s what cordless electric drills are for ☺ Also I still don’t know what I would/will use instead of the epoxy. It seems important to hold those wire ends so they don’t get sucked onto the air filter surface, or flap around a lot; they’re sticking through only about an inch.

I can imagine a little metal screw-in stud with a hole in the middle for the wire, but I don’t know what it’s commonly called, or if anyone is actually making them right now ☺ They probably are, these days. If it becomes desirable, I’ll probably send up an RFQ (request for quote) to MFG or AliExpress or something. Wording is the problem then, plus the fact that although I might have seen one or two of these in the dim mists of memory, I don’t have precision for it, and my drawing skills aren’t great. I once taught myself the rudiments of the DOS version of AutoCAD though, maybe I could revisit that kind of graphics; someone must have an open-source CAD these days, right…?

Results

Here’s the summary.

The device has been in place for about 3 months as of this writing. It went in in the dead of a very cold winter, ranging from -5 to +35 F (-21 to +2 C) or so. The first set of results came using just one of the negative ionizer widgets discussed above:

  • Cold running. In the more extreme cold, Bertha used to sound a bit strained until warm, like many other engines I’ve heard. Not anymore. Even stone cold, at minus five, the gas pedal seemed to have about as much juice as when warm. She probably burned a good bit more gas doing it, but she was much happier to run than without it!
  • Starting. Bertha has never had real trouble starting once I gave her a really good (and pricey) battery and new starter, just normal behavior. But now it’s not normal. Hot or one-hour-warm, she takes off, probably turning over once. Cold, one turns the ignition on for a few seconds to run the fuel pump and charge that air…and she turns over just a bit and righto she goes.
  • Idle has an interesting sound change (she does have glasspacks…): very, very regular, and when hot quite a lot lower in volume, clearly doing more motion with less.

After about two months, I put in two more ionizer elements, for a total of three. Just one of the electronics boxes is visible; there is actually quite a lot of room underneath there.

  • Definitely more power at all times. My sweet Lori, who is not often very impressed with my occasional forays into unusual creative [some might say bizarre] engineering, actually commented on this as we hit the freeway together for the first time after the third went in.
  • We have had some 80-degree days, and on just one of them, there were two interesting incidents, about a quarter-second each. These were moments after Bertha had been idling a minute or three, perhaps building up a lot of charge from the widgets. Each time, just after pressing the gas very lightly to move, RPMs rose surprisingly fast, and there was a sound something like a rushing wind, in the exhaust manifold I think. Hasn’t happened since. Am quite curious as to what this was.

A couple of weeks ago, sweet Lori and I did two careful fillups at the same pump at the same station, and ran two there-and-backs to Lawrence, Kansas, about 30 miles away, on exactly the same route at approximately the same speeds. About 3% (half a gallon) less gas was used with the widgets on than off. I won’t say that’s a clear and present advantage, because 3% isn’t much, and you’d really want to do that testing on a dyno. But it’s not nothing, and it was a rainy day with very wet air, the very condition most likely to hinder the air chargers. Very much looking forward to more testing.

It’s also true that I was surprised at how little change there was when the air chargers were turned off, given the experiences above. I am theorizing that a lot of the overall effects so far may be due to a simple general cleaning effect of having the charged air running through.

That theory will be tested. Stage II is in development :) That’s at least three and possibly six air charger widgets more, which won’t fire until the engine is off idle. Low/hot idle is 550 RPM on Bertha’s V8, the current plan is to have Stage II kick on at 1000 RPM or so. She cruises 65-70 MPH at 2000, we will see what overall behavior indicates. Getting this to work appears to be a bit of a challenge :)

Do drop me a line if you have questions, are interested, or try it!!!

Categories:      

==============

Office 365 and Exchange Online Product and License Lists
article #1195, updated 3 days ago

There are many different products / licenses for Office 365, in several categories. The first item has links to the rest:

Business, general

Small business

Education

Government

Nonprofits

Home

Firstline Workers

The above suggested by the excellent Tharin Brown.

Categories:      

==============

Speed in Lower RAM on Manjaro and Arch Linux, With a Custom Kernel
article #1294, updated 6 days ago

Background and synopsis

The modern web is becoming more and more demanding of web browsers, for a larger and larger proportion of web sites, every year. A lot of people have solid working hardware which cannot accept more than 8 gigabytes of RAM, and often much less. I’m fairly certain that a completely full-blown desktop/laptop of today, with full and smooth online+offline video playing, USB stick automounting, camera/phone photo autoslurping, LibreOffice or Softmaker Office, Thunderbird for email and calendaring, et cetera, cannot be had in less than 8 gigabytes of RAM. If you know differently please do email me! I use Manjaro, XFCE build, which gives me a great balance of:

  • nonreinvention of wheels at install,
  • amazing buglessness (congrats folks!!!),
  • full desktop functionality available,
  • ability to easily turn off any bits of desktop functionality which I don’t want, while
  • keeping everything familiar and easy to find.

But even in its max of 8G RAM, the aged 2.2 GHz machine I’m typing into right now has had lots of trouble keeping up with my massive multitab web habit, even with sysctl settings, unless I do what is outlined below.

The gist of it is, we compile a kernel in a very special way. The kernel is the core of any Linux machine – in fact, the kernel is really what Linux is; everything else is other projects which are combined with the Linux kernel to produce a complete machine. This is why one can still find some grammatical purists out there complaining that there is no such thing as a “Linux computer”, that computers are “GNU/Linux”, combinations of GNU software and the Linux kernel. This is a helpful way to recognize grammatical purists, but certainly no one says “EveryoneElse/Windows” or “EveryoneThatAppleWillTolerate/Apple”, and so I don’t think we need to listen to grammatical purists very much :-) But I digress. The point for this exercise is that probably every distribution of Linux ships with a kernel compiled in a generic fashion – it will run on the very minimum hardware stated by the distro documentation, and it will run acceptably on some really high maximums too.

But Linux is Linux: we can compile a kernel for our own particular needs, for our particular hardware. And believe you me, there is major advantage in doing so. Sometime, if you find yourself caring enough, dig into the differences between actual models of Intel CPUs from the last twenty years. Twenty years is good because that’s about the maximum range distro kernels have to handle. And the differences are tremendous. Newer ones have far more registers, far more internal simultaneous operations they can do, far more lots and lots of things, things I wouldn’t really understand without possibly hours of study. And I don’t have to know about them; I just find it wonderful that if I compile my kernel for my very own CPU, my entire system suddenly starts to use more of what I have, and things run much better.

Granted, one can find lots of people on the web claiming otherwise. One can find lots of different people on the web. :-)

But simply running CPU-optimized does not address my own problem. I like to read news by opening between six and a dozen browser tabs with right-clicks from front pages. This ate through 8 gigabytes of RAM to the point of crashing quite easily…until I added a setting to my kernel compilation. I’m still doing CPU-optimization. But I’m also choosing to optimize for size, not performance. Default is performance. Size pulls the miracle :-) I do other memory-intensive things too sometimes, but massive multitabulation seems to be a terrific stress-test.

Sorry for the extended verbiage, but I’m fairly certain some readers won’t be familiar with some of the above, and it forms a good conceptual basis! So here we go.

Here we go!

On modern Manjaro/Arch, there is a new AUR-inclusive package management tool, called ‘yay’. ‘yay’ is in the Manjaro community repo, and this is one small reason among a great many why I like Manjaro! So starting with a well-behaved Manjaro, we install yay:

sudo pacman -S yay

And then we set it up for PKGBUILD editing:

yay --save --editor /usr/bin/nano --editmenu --noansweredit

What is a PKGBUILD, you ask! Well, a PKGBUILD is an Arch-standard file which defines how a package is compiled and installed. Every package, AUR or repo, has one. We have just set up to make it very easy to edit PKGBUILD files for every AUR package we take in by ‘yay’. The full-screen text editor ‘nano’ will be used for editing, and yay will ask us every time if we want to edit. ‘yay’ is very nicely configurable, another set of congrats. It is worth mentioning that we edit PKGBUILD files only most carefully; packages break trivially with bad edits, and worse things can occur, albeit relatively rarely.

The next step is to do a little preparing of the environment. Edit /etc/makepkg.conf, and find these lines:

CFLAGS="
CXXFLAGS="

Both lines will be quite long, with close-quotes, containing several items. One of the items in both is -march=; this needs to be changed from whatever it is to -march=native. We also need an item added or changed if it exists: we need -mtune=native. This will make everything we compile, run by far the best on the very make and model CPU we have in this machine. It will also make the packages not run well on anything else, and not run at all on some machines! Fair warning :-)
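If you’d rather make those edits non-interactively, the substitution can be sketched with sed. This is a demonstration against a sample CFLAGS line of my own invention, not your live /etc/makepkg.conf; point sed at the real file (after backing it up!) when doing it for real:

```shell
# Demonstrate the -march/-mtune substitution on a sample line;
# adapt the input to /etc/makepkg.conf for the real edit
echo 'CFLAGS="-march=x86-64 -mtune=generic -O2 -pipe"' \
  | sed 's/-march=[^ "]*/-march=native/; s/-mtune=[^ "]*/-mtune=native/'
# → CFLAGS="-march=native -mtune=native -O2 -pipe"
```

If your file has no -mtune item at all, the second substitution simply won’t match, and you’ll add -mtune=native by hand.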

Also in this file, find a line starting with this:

#MAKEFLAGS="-j

There will be a number to the right and a close quote. Find out how many CPU cores your machine has, and add one; so if you have a dual core, you’ll add this line just below the original:

MAKEFLAGS="-j3"

This speeds up package compilation a lot, even with just two cores, and enormously with 4 or more.
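If you don’t know your core count offhand, it can be computed; a small sketch assuming GNU coreutils’ nproc, which is present on any Manjaro/Arch install:

```shell
# Print the MAKEFLAGS line for this machine: core count plus one
cores=$(nproc)
echo "MAKEFLAGS=\"-j$((cores + 1))\""
```

On a dual core this prints MAKEFLAGS="-j3", matching the example above.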

There is one more item to prepare. In this file (“~” means your home directory):

~/.gnupg/gpg.conf

you’ll want to add the following:

keyserver-options auto-key-retrieve
auto-key-locate hkp://pool.sks-keyservers.net

This eliminates the need to manually receive and approve GPG signing keys for various source files as they are downloaded.

So once we have the above done, we need to choose our kernel package. Most recently I have been using “linux-uksm”, one of many Linux kernels available through AUR with sets of performance patches. All of them have different strengths, and many of them do not make it easy to customize. Thus far linux-uksm has done extremely well. So we begin:

yay -S linux-uksm

We give the default responses by pressing Enter/Return, until it asks if we want to edit the PKGBUILD:

:: Parsing SRCINFO (1/1): linux-uksm
  1 linux-uksm                               (Installed) (Build Files Exist)
==> PKGBUILDs to edit?
==> [N]one [A]ll [Ab]ort [I]nstalled [No]tInstalled or (1 2 3, 1-3, ^4)
==>  

For this, type just the number 1 (that’s a package count number, in case you were running ‘yay’ on multiple packages at once), and press Enter/Return. The PKGBUILD will come up in the text editor ‘nano’. As of this writing, it looks like this:

# Maintainer: Piotr Gorski <lucjan.lucjanov@gmail.com>
# Contributor: Jan Alexander Steffens (heftig) <jan.steffens@gmail.com>
# Contributor: Tobias Powalowski <tpowa@archlinux.org>
# Contributor: Thomas Baechler <thomas@archlinux.org>

### BUILD OPTIONS
# Set these variables to ANYTHING that is not null to enable them

### Tweak kernel options prior to a build via nconfig
_makenconfig=

### Tweak kernel options prior to a build via menuconfig
_makemenuconfig=

### Tweak kernel options prior to a build via xconfig
_makexconfig=

### Tweak kernel options prior to a build via gconfig
_makegconfig=

                           [ PKGBUILD -- 386 lines ]
^G Get Help  ^O Write Out ^W Where Is  ^K Cut Text  ^J Justify   ^C Cur Pos
^X Close     ^R Read File ^\ Replace   ^U Paste Text^T To Spell  ^_ Go To Line

Using the cursor buttons on your keyboard, move the text cursor to the position just to the right of the line _makexconfig=. Press the Y letter key once. You have now instructed the PKGBUILD engine to build and run a kernel compilation configurator application, called ‘xconfig’, as the next step. Press Ctrl-X to initiate closure of nano, press Y to confirm saving, and press Return/Enter to save and close the file. Because a kernel PKGBUILD is a large and multifaceted thing, yay may bring up one or more additional files; keep on pressing Ctrl-X until you’re back at the ‘yay’ prompt (at this writing, just one additional Ctrl-X):

==> PKGBUILDs to edit?
==> [N]one [A]ll [Ab]ort [I]nstalled [No]tInstalled or (1 2 3, 1-3, ^4)
==> 1

==> Proceed with install? [Y/n] 

Press the Y key, and Return/Enter. The source code for the kernel will download, and then xconfig will be compiled and run. It’s a GUI, and the big advantage for us right now is that the search works very well. There are a truly enormous number of changes that can be made, and almost all of them can result in an unusable kernel (or sometimes even a kernel that can do harm over time); we want exactly two.

So click the Edit menu, choose Find, and type in the word “native”, no quotes. A nice GUI chooser circle will appear to the left of “Native optimizations autodetected by GCC”. Click that GUI chooser to fill the circle. This is the step which will engage all, or nearly all (depending on the version of the GNU compiler and how new your CPU is!), of the features of the very CPU you are using, in the kernel which will shortly be compiled.

Then, click in the Find box field, and type the word “size”, no quotes. Another nice GUI chooser circle will appear, to the left of “Optimize for size”. Click that GUI chooser to fill the circle. This is the step which will optimize for size (memory-usage efficiency), not performance. Default is performance.

Once the above is done, close the Find window using basic GUI, usually the extreme upper-right corner of the window. Click the File menu, choose Save. Then File menu, Quit. Kernel compilation will begin.

Depending on how much RAM and how many CPU cores you have, compilation can take a very long time; with just one core and (say) 1 gigabyte of RAM, probably several hours. The kernel install will complete after the build is done. However, at least as of this writing, you’ll have to do the following to get the kernel headers (essential) and docs (admittedly optional) in:

cd ~/.cache/yay/linux-uksm
sudo pacman -U linux-uksm*pkg.tar.xz

Also be aware that if you have compiled more than one linux-uksm kernel, the above will attempt to reinstall them all; you may want to clean house first :-)

After you reboot, your new kernel will probably be active. You can use uname -a to verify this with certainty, though if you’ve never done this before you’ll probably notice a performance jump immediately. If you would like more control of the kernel you run at any time, grub-customizer is highly recommended.

Categories:      

==============

Recompiling for Performance on Arch Linux and Derivatives
article #1196, updated 7 days ago

At the core, any current desktop OS is running binary code; and the vast majority of it is binary code which uses only a subset of the CPU at hand. This is because there are so many different CPUs which need to run the same code. Intel and AMD constantly add things to the CPUs they put out, but code of general distribution lags very far behind, because when one downloads and installs, that code set has to run on everything, whether it be ten years old or three months old. On an excellent Linux, one can recompile any package in such a way that the resulting binary code uses the entire CPU which one is using. Performance gains are thus available.

Most of the advice I have been given and found, for recompiling certain packages for Arch Linux and derivatives, has made things very very complicated, and often includes statements that it’s not worth it. Well, I am a witness that it is well worth it, one can increase performance quite a lot, and it’s not too complicated as of this writing.

My Arch derivative of choice is Manjaro; it does a lot of things for us. But all of these methods are pure Arch; all you have to do is get the prerequisites.

Prepare the environment

Before we do anything, we update the system and reboot. This is partly because operations further down will get new packages.

Then we install yay. Manjaro has it in its standard repos, so it can be installed just with pacman -S yay; it’ll be a bit more difficult under pure Arch. Once yay is in, we edit /etc/makepkg.conf, and find these lines:

CFLAGS="
CXXFLAGS="

Both lines will be quite long, with close-quotes, containing several items. One of the items in both is -march=; this needs to be changed from whatever it is to -march=native. We also need an item added or changed if it exists: we need -mtune=native. This will make everything we compile, run by far the best on the very make and model CPU we have in this machine. It will also make the packages not run well on anything else, fair warning :-)

In addition in this file, find a line starting with this:

#MAKEFLAGS="-j

There will be a number to the right and a close quote. Find out how many CPU cores your machine has, and add one; so if you have a dual core, you’ll add this line just below the original:

MAKEFLAGS="-j3"

This speeds up package compilation a lot, even with just two cores, and enormously more with 4 or more.

There is one more item to prepare. In this file (“~” means your home directory):

~/.gnupg/gpg.conf

you’ll want to add the following:

keyserver-options auto-key-retrieve
auto-key-locate hkp://pool.sks-keyservers.net

This eliminates the need to manually receive and approve GPG signing keys for various source files as they are downloaded.

Install an Optimized Kernel

So. Once the above is done, it’s not hard to use yay to build and install the Xanmod kernel, an excellent desktop-optimized kernel:

yay -S linux-xanmod

Yay will bring in the PKGBUILD, the file defining how the kernel source is downloaded and the package built. It quickly gives the option to edit it, and doing so is part of our procedure. As of this writing, you’ll look for one line:

_microarchitecture=0

and change this to:

_microarchitecture=22

This is according to documentation in the file itself; in this version at least, 22 equals “native”, which means the kernel will be optimized for the very specific CPU make and model in your machine. You can then save and choose the defaults for the rest of the process. It will take a while, 30 minutes often and much more on slower machines. Once the rebuild and install is done, you will notice a performance boost after booting the new kernel. Do be aware that automatic updates may override this kernel down the road; you can use grub-customizer (also available via yay) to specify which kernel you will boot.
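That one-line change can also be scripted; a sketch of the edit, demonstrated here on the variable itself rather than the real PKGBUILD. Remember the value 22 is specific to this version of the package, so confirm the “native” number in the file’s own documentation first:

```shell
# Shown against a sample line; run the same sed against the PKGBUILD
# itself only after confirming the "native" value in its comments
echo '_microarchitecture=0' \
  | sed 's/^_microarchitecture=0$/_microarchitecture=22/'
# → _microarchitecture=22
```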

Build glibc

After the kernel itself, by far the most used boulder of code in a Linux machine is the GNU C Library, glibc for short. So we rebuild this next.

We pull the PKGBUILD and related build scripts with yay:

yay -G glibc

And then we cd into the directory created, and light off makepkg and watch it go:

cd glibc
makepkg -s

If packages are needed for the build, they will be installed first, and then compilation will commence. Compilation will take quite a while, longer even than the kernel. After it’s done, install:

sudo pacman -U *.pkg.tar.xz

and reboot to fully engage, though you may see improvement as soon as you start running or restarting programs.

Issues with many packages

There are issues which can show up with many packages.

First of all, compilation may fail. glibc is a huge package with a very large number of complications, and sometimes those complications have to do with specific versions of gcc and other items — which means if your machine is updated past those versions, you won’t compile successfully. You can either dig deep in code and/or forums to see what is going on, or just wait until the (very knowledgeable and capable, much more so than I) primary developers resolve it for all of us. Even something like the Xanmod kernel compilation may fail occasionally for the same reasons; there are quite a few more kernels available to try from yay, though each of them has different methods of setting CPU optimization, so do watch for this.

Secondly, there is getting the versions you need. You probably want the standard version, not the AUR version (bleeding edge sometimes, archival and out of date sometimes too!). yay -G will tell you what it’s doing, but do be careful not to use outdated versions; that can break your OS if you go off the beaten path.

And thirdly, when you automatically update using pacman or GUIs, newer un-optimized versions will be autoinstalled over your optimized ones. There may be ways to override this, but overriding is very questionable, because a very outdated package of many sorts is likely to produce crashes, especially something as core as glibc or xorg-server. Better to just recompile after the update is installed. It is also helpful to choose such packages for the rarity of their updates, and glibc is one such.

Other packages to CPU-optimize

There are many other packages worth recompiling. I choose these regularly and differently according to a high result/effort ratio! Here is a list; there are doubtless many more. These are all for the yay -G and makepkg method used for glibc, not yay -S. There may well be others which will help more, certainly for particular use purposes, e.g., audio and video.

gtk3
gtkmm3
gtk2
gtkmm
cairo
cairomm
qt4
qt5-base
pth
libxml2
glade
libglade
libglademm

Categories:      

==============

Sticky notes for the Windows desktop
article #750, updated 8 days ago

I have just moved to this one:

https://www.conceptworld.com/Notezilla

It handles text very well, including limited rich text, but lo and behold, handles images very well too.

Categories:      

==============

Remove Microsoft Edge Browser using Powershell
article #1293, updated 9 days ago

There is a method:

https://answers.microsoft.com/en-us/edge/forum/all/uninstall-microsoft-edge/3040dac6-cc0b-4dc3-9280-186856089ca7

Categories:      

==============

In Outlook, invitation emails go away; here's how to keep them
article #1292, updated 10 days ago

Interesting info:

https://superuser.com/questions/1051538/lost-email-after-accepting-invitation/1051542

Categories:      

==============

Install and run software isolated in Windows sandbox environment
article #1291, updated 11 days ago

Amazing.

https://www.sandboxie.com/

Categories:      

==============

Windows Updates by Boxstarter via Chocolatey
article #1289, updated 14 days ago

Chocolatey is a great way to get a huge variety of software into your Windows machine in a very consistent way. Boxstarter uses Chocolatey for large repeated OS and package setups, both virtual and hardware. Boxstarter has a great Windows update method inside. To call it all via Powershell, one can do this (make sure you’re administrative):

$PSCred = Get-Credential
Set-ExecutionPolicy Bypass -Force
iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))
choco install boxstarter -y
choco install boxstarter.chocolatey -y
Install-BoxstarterPackage -PackageName Boxstarter.WindowsUpdate -Credential $PSCred

The credential is a local admin to the box, it is there so the updater can run through as many reboots as necessary to get the job done. Please do be aware that this will reboot the machine immediately after setup, and will reboot it repeatedly as needed to get the machine fully up to date. It also installs a public desktop icon called “Boxstarter Shell” which probably will need to be removed.

One can copy all of the above lines into a file, e.g., “winup.ps1”, and then run “.\winup” in an administrative Powershell, it will work very nicely.

Categories:      

==============

Mount SSH-shared Folders in Windows
article #1290, updated 14 days ago

https://github.com/billziss-gh/sshfs-win

CLI command:

net use X: \\sshfs\login@hostname.fqdn

Categories: