My journey of repairing and recycling anything I put my hands on that I believe is still useful. Not just hardware, but also software, along with relevant content and issues in the fields of Cyber Security, Vulnerability Scanning and Penetration Testing.
As some of you might know, I string racquets – predominantly badminton racquets. I was stringing a racquet yesterday for a client and noticed that the sliding action of the string gripper was a bit rough. The string gripper is the part that grabs the string and allows the tension head to pull the string to the required tension. Since the action was a bit rough, it would sometimes take a moment to release the string after I had tensioned and clamped the string.
This morning, I decided to restring one of my racquets so that I could try it out with a different string when I play badminton later today. I usually play with Babolat badminton racquets, but had bought an Apacs racquet a while ago. The Apacs Z-Slayer racquet had been pre-strung by the supplier, and when I played with it, I wasn’t comfortable with how it played, so I had left it aside. I usually string my racquets with Yonex Nanogy 98 string, but after another client asked for Yonex BG-66 Force, I had tried that on one of my racquets and actually liked it. I thought I should string the Apacs racquet with the Yonex BG-66 Force string.
I started to string the racquet and could see that the gripper was still jamming a bit. The string gripper has two horizontal roller bearings which appeared to be a little dry, i.e. lacking in lubrication. It is likely that the bearings needed a bit of oil. The WISE 2086 was nearly 8 years old and had strung 660 racquets, yet I think I had only ever lubricated it once – some years ago – so it was probably overdue.
I held the gripper closed without a string so that I could get to most of the bearings, and put a drop of 3-in-One oil on each bearing. Then I released the gripper. It stayed closed for a moment, then released. I closed and released the gripper a number of times until I could see that it was no longer sticking closed and was opening immediately.
This photo shows the gripper in the closed position, holding a string at tension. Previously, the roller bearings were not lined up neatly in a row – now they were.
When the tension is released, the gripper will release the string once the string tension is relaxed. As the string starts to slacken off, the gripper should release and hit the end stop. The snap heard when it hits the end stop should be very prompt, and indicates that the bearings are no longer jamming.
Now I can continue stringing this racquet and the string gripper is as good as new.
Last year, my sister had brought over a Samsung Galaxy Tab S, model SM-T705Y. This was a tablet that Optus had been offering to people getting a new internet connection – I also had one, from my Optus connection years ago. Anyway, her problem was that the battery didn’t seem to last very long, so she ordered a replacement battery on eBay and came over for me to help replace it. I helped to replace the battery, then a week later had to do the same thing again – the battery hadn’t lasted very long, so a replacement was sent out again.
After this second replacement, all went well until a week or so ago. She had accidentally dropped the tablet, but this time onto a tile floor. Previous times, dropping the tablet with its silicone shock-protected case onto carpet didn’t cause any noticeable damage, but this time it had failed. The fault was that after powering up, it would then decide to turn off by itself, almost as if the battery wasn’t working. She brought me the tablet, and I did some research on the problem. It seems that others have had this problem and determined that it may be the battery connector becoming dislodged or the connector socket had disconnected from the board.
It seemed unlikely, as the battery must be connected for the tablet to power up, and it appeared to charge when I connected a charger, so what could it be? On Saturday evening, I opened up the tablet case. After removing the back cover, I could see that the battery connector didn’t seem to be fully plugged into the socket. There was a slight bulge where the green wires were, so I pressed on the connector with my fingernail and it seemed to give, with a slight click.
On powering up, it seemed to stay turned on, which was different and seemed to be working again. I took a photo afterwards to show you what it looks like.
The white connector only has clips on the sides to attach it. For the battery to work at all, the red and black wires must be connecting. The green wires are most likely used for BSI and BTemp. BSI is the Battery Status (or System) Indicator that reports the status and capacity of the battery. BTemp is usually a thermistor that reports the temperature of the battery.
This fault would then seem to be BSI related hence the tablet turns off because it cannot determine the status of the battery. That would also explain why repeating the turn on process gives quite wide variations of the reported battery capacity. Sometimes 85%, and other times 50%, then one time it turned off and when I connected the charger, it showed 1% capacity.
After some testing, the tablet remained on, so it was time to put the tablet back together. I pressed the connector one last time, and it still gave a slight click – which is a bit concerning. I used a magnifying loupe at 10x to visually inspect the connector terminals to the board and they looked intact – no sign of a dry joint, that could be another cause.
Anyway, I put everything back together and gave instructions that if this happened again, to press on the back cover firmly above and to the left of the micro USB port. This would likely be where the battery connector is, as seen in the photo.
Why does this happen? It could be that the replacement battery is not a genuine Samsung battery, even though it seems to have the right stickers, and that the battery connector might not have been made to the same tolerances as the original. I did notice that the connector didn’t seem to be the same colour as the one removed the previous year – it seems a little more translucent. Maybe I should put the connectors under a microscope and see what the difference is in the terminals. Anyway, that’s it for now.
Do you use a UPS – or maybe you ask, what is a UPS? A UPS or Uninterruptible Power Supply is a mains power device that continues to supply mains power in the event of a blackout, whether the outage lasts a moment or many hours. A UPS has internal batteries – sometimes a single battery, or often, for higher power or longer runtime, multiple batteries in series.
UPS’s traditionally came with internal lead-acid batteries, but in the past few years, more modern UPS’s can come with lithium batteries. Lead-acid batteries have a limited service life – like those in cars, they need replacement from time to time. Some UPS’s have an internal self-test mechanism that can alert you to a failing battery. Others do not, and the only time you find out is when a blackout or even a brownout occurs. Your computer turns off and you hear the silence.
Over the years, I have used APC branded UPS’s. APC was a company called American Power Conversion Corporation; the brand has since been acquired by Schneider Electric. Most of the APC UPS’s can still be serviced, which is one of the main reasons I chose APC at the beginning. I used a UPS to protect my home network, which comprised a domain controller and a firewall, then gradually expanded over the years to include one NAS, then two – and eventually now, I have many UPS’s.
One UPS protects the NBN modem that terminates my hybrid fibre-coaxial internet connection. Another protects the Telstra Smart Modem that makes the internet and my home phone line available. A main UPS then protects my virtual machine host and various NAS storage, etc. These are not the only ones though – my son has one for his desktop. Right now, there are 6 of them, and recently a couple have given battery failure alerts.
Service life – generally speaking, we should get 3 years out of the batteries, and if lucky, longer. The point of this was that I needed to replace the batteries in order to get the UPS’s back in operation.
My son’s UPS was an APC Back-UPS 1400VA, with a model name of BX1400U-AZ. I go to the apc.com website, choose Support, then Find Your Replacement Battery. Entering the model name gives me a choice which, when chosen, tells me that this UPS needs an RBC113.
I did try to get a price on this but my main supplier did not seem to have this in stock, so I decided to look for equivalent batteries. In particular, I was looking at CSB batteries. I need batteries that are designed to be used in a UPS, and I needed to find the right physical size and equivalent capacity. Eventually I managed to find out that I needed 12V 9Ah batteries that physically were the same size as the common 12V 7.2Ah batteries. The connection terminals needed to be F2 types which are 6.3mm or quarter-inch terminals.
In addition to this, I had a call from the security company saying that my alarm system was reporting a failed battery, so I would need a 12V 7.2Ah battery for that, with F1 terminals.
I also had another UPS, an APC BE700G-AZ, which also needed a 12V 9Ah battery (RBC17). Then another UPS, a BK650MI, needed an RBC4, which is a larger 12V 12Ah battery. Then checking on the UPS for my NBN and Telstra modems – an APC BK350EI that uses an RBC7, a 12V 7.2Ah battery – I thought I should get one as a spare. So finally, after all this, I had a shopping list for various batteries. I contacted a local supplier for CSB batteries and, after a short email exchange, got a quotation and placed the order.
The batteries arrived on Thursday, just before the Easter holidays began, so it was an opportune time to perform the replacements. In general, the APC UPS’s are relatively easy to open for battery replacement – except the BX1400U. The front cover needed to be unclipped, and under that were three screws to be removed; then the back cover, after disconnecting the battery safety plug, had four screws. Then the case had to be cracked open – generally by using a couple of big screwdrivers to pry open the sides.
I forgot to take photos, but did find someone on YouTube who had done this, and was able to follow the steps carefully – since in my UPS, there were cables to the internal transformer that were clipped to the case. If I had followed the steps literally, I might have damaged that particular cable. The BX1400U needed two 12V 9Ah batteries, taped together and connected in series. After installing those, I reassembled the case and plugged the battery safety plug back in. There was the sound of a spark when I did that, but this can be quite normal when connecting battery leads – this UPS isolates the battery by using a safety plug which is like a bladed fuse.
After doing this for each of the UPS’s, I connected them to the power board, turned them on and left them for a few days while I monitored them. The battery failure light should now be off, and this gives sufficient time to fully charge the internal batteries before I disconnect them. Ideally, when I turn off the power to a UPS, it should beep to alert me to a power outage, but should stay turned on – which they all did.
So it appears that my battery replacement has restored these UPS’s to working order. We will find out in the next power outage, I am sure. If you are not using a UPS to protect your NAS, you might want to consider doing this, as each time there is a power failure, your NAS may suffer some slight data corruption, which can get worse each time it happens.
[P.S. If the APC UPS has failed, and even replacing batteries hasn’t allowed proper operation – it is likely that the UPS has had an electronic failure. APC does have a trade in service that gives you 25% off the purchase of a new APC UPS. Also they did have a repair option for out-of-warranty UPS’s which I had used in the past for a BP1400 that protects my main servers, but I don’t seem to find that now.]
This is a follow-up on my previous article about the failed power adapter for my CCTV Camera system. I mentioned that I had replaced the failed power adapter – the one that had a burnt transformer, which was not able to be repaired. The power adapter I plugged in also had a fault which appears to be a failed output filter capacitor.
This filter capacitor is at the output of a 12V regulator, so should have removed a lot of the mains ripple voltage. It shows up as a moving line or bar in the video output from the camera.
The bar can change in height, but will move around. I had ordered a replacement power adapter which arrived in the post just a short time ago. As the ladder was already in place, it was a simple matter of swapping out my power adapter with the new one. I had ordered the power adapter with the right plug and center pin positive polarity, so it was a simple swap.
After this was done, the change in the video was quite obvious, no more lines. The picture quality isn’t great as you can see – as the cameras are analog versions, which are many years old, but will suffice for now. It looks like the time is wrong in my recorder – I will have to fix that up. That’s all for now.
It is a Sunday again, and another wet one. Actually, it has been raining frequently for almost the past week and our front lawn is so water-logged that it almost resembles a swimming pool, albeit for frogs. As I write this article, I hear that evacuation warnings are out in many areas of the state, and Sydney’s main dam – Warragamba Dam – has started spilling over as it has reached capacity. That is going to exacerbate the current flooding from this storm.
Anyway, today’s task was to look at the CCTV camera system, which had stopped working some time ago. There was no video coming from the cameras to the video recorder, so I supposed the problem was in the roof. The CCTV cameras are powered by a power adapter that connects to two cameras at a time through a splitter. This power adapter is located near the manhole in the garage. As the manhole is quite high, I needed an extension ladder to get to it. Since we don’t use such a ladder all the time, we swap the ladder between my sister and me as needed. Anyway, that is another story.
Since the ladder was now here, I placed it up through the manhole and went up to see what was going on. The power adapter for the cameras was in the roof cavity, and when I saw it, I knew that it was unlikely to be repairable. I brought it down, rummaged around in my spares box for a similar power adapter, and plugged that one in. The cameras were now working, but for a horizontal stripe that moved up and down – meaning that some ripple in the power adapter’s DC output was getting into the video. But at least the cameras were working now.
It wasn’t my camera taking this picture that was wonky, but the power adapter itself was misshapen. It should be a rectangular object, but it seemed to have been exposed to a lot of heat. We have had some hot days during our summer, but there was also a bulge at the top.
This bulge isn’t normal, but may be indicative of why the power adapter had failed. I decided to open it up and see what had caused it to fail like this. The plastic was brittle, an indication of either old age or heat. As the power adapter wasn’t more than a few years old, I suspected that old age would not be a factor, so heat must be.
After using a rubber mallet, a couple of large screwdrivers, and a pair of pliers, the case started breaking up (or splintering actually) into small pieces and finally I had it open.
That burn mark stood out, and it was where the case was bulged out. This is quite unusual, so I turned it around to take a better look.
One of the primary leads had burnt off from the transformer, so I surmise that the primary side of the transformer had developed a short circuit where the lead was connected, and caused it to overheat. The exhaust gases from the short circuit would have ballooned the case out, and eventually got hot enough to cause the case to shrink and become misshapen. This would have continued until the lead melted or burnt off from the transformer.
Fortunately, it didn’t catch fire. Anyway, this burnt transformer confirms my initial suspicion that this power adapter is a write-off, so repair is out of the question. I don’t really want to take the transformer apart – removing the laminations, removing the insulation, removing and rewinding the primary winding, adding new insulation and reinstalling the laminations – which is what it would take to repair it. Of course, I don’t need a CCTV camera system to tell me how miserable it looks outside on a day like this, but at least I am getting some video, and it tells me that the cameras need cleaning. Over time, condensation can build up on the inside of the housing, which makes the view appear foggy. I think I will have to wait for the rain to stop before I try getting to the cameras.
I had a look on eBay for replacement power adapters. It appears that 12V 1A power adapters are much cheaper than 12V 300mA ones, so I will order a 12V 1A version. I guess I will live with the horizontal stripes until the replacement arrives. Maybe it is time to install a new NVR camera system rather than keep the current analog CCTV system.
Last Sunday was a bit of a wet day, so it was an opportune time for tidying up. I saw the ReadyNAS sitting there with its repaired power supply, and thought I really should get it going again, especially since I have two brand new 4TB hard disk drives to go into it. As previously mentioned, there were two hard disks still installed in this NAS, and I don’t know what is on them – nor should I disclose what is on them even if I did. Great, I can use this NAS for an exercise.
Hypothetical scenario – what if you are the IT support for a company, and you have been asked to get a NAS running on the network, because they had some data stored on it years ago (like 2016), and it had been powered off and stored in a cabinet in case it was needed again. Whew! That was a long sentence – what could go wrong, you might ask?
More information – in the past 5 years, there was a revamp of the company network. New storage servers were put in, and the last person who used the NAS was the IT manager who left 4 years ago. Does this sound familiar? I know, it happens a lot – in November 2019, I had a similar situation with Wyse thin clients for a company in the same predicament. Anyway, where was I? OK: get access to the NAS and get it on the network – the bosses are waiting for the data that is on it.
This is hypothetical, remember? Where do we start? Yes, I should make a checklist.
1. Check if the disk drives work
2. Work out what the partitions are
3. Determine the IP address of the NAS
4. Find out the method for credentials, and replace with known credentials
5. Do this without destroying all the data
6. Put it on a simulated network and log in to the NAS
7. Tick all the above steps and we are done
Step 1 – check whether the hard disk drives still work – I think they do, because I did power up the NAS, and I didn’t hear the dreaded clicking sounds that usually come from a failing disk drive. At the time, I could hear both of the disk drives spin up. So tick off that the disk drives appear to be working – at least physically.
Step 2 – check whether the disks are accessible. One way to do this is to take the disk drives out, and insert them into another computer – preferably running a form of Linux. I have a data recovery machine that has a five disk drive hot-swap chassis, so that is what I used.
I worked out that the two disks are part of a raid set, with two raid 1 partitions and a raid 5 partition – although with two disk drives, how does that fit with a raid 5? No matter – it is what it is. The first partition is a 4GB partition, with the second being 512MB. The raid 5 partition is the remainder of the 500GB disk capacity – which is quite likely the data partition. One way of getting access to the data is just to mount one of the disks and read the data from it, but that would be too easy and not what the bosses wanted.
Step 3 – what is the IP address? I need to access the control partition to do this, which would be the 4GB one. I put both drives into my data recovery machine, then booted into Ubuntu. Ubuntu on boot up checked the disk drives and presented me with a number of raid devices, namely /dev/md0, /dev/md1 and /dev/md2. /dev/md0 was 4GB, so that is what I need to look at. I tried to mount /dev/md0 but my Ubuntu complained that the raid volume was read-only. Since md0 is only 4GB, I could just make a copy of it. I did this using the command:
sudo dd if=/dev/md0 of=first.dd
Next I will setup a loopback device and mount it that way – the next commands should do the trick:
sudo losetup -f -P ./first.dd
losetup -l showed me that the loopback device was /dev/loop7, so the next step was to mount the loopback device
sudo mount /dev/loop7 /mnt/mydisk
mydisk is the mountpoint folder that I had created earlier. Using a file explorer, I went to /mnt/mydisk and found a Linux folder structure – which I had suspected already. Looking at the /mnt/mydisk/etc/hosts file told me that the IP address was 10.0.0.129 – IP address found, ticked.
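Reading the file from the mounted loopback image is a one-liner (paths as used above):

```shell
# The NAS's own hosts entry reveals its static IP, e.g. 10.0.0.129
cat /mnt/mydisk/etc/hosts
```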
Step 4 – knowing that the NAS operating system would be Linux-based, I know to look for the /etc/passwd and /etc/shadow files. Essentially, the shadow file told me that there were only two passwords in use, one for root and one for admin. I don’t know what the passwords are, but I can tell that they are hashed using MD5 – the password hashes start with $1 and contain a salt. If they were unsalted, it would be an easy matter to look the MD5 hashes up in precomputed tables and obtain the passwords, but for this exercise I only need to replace the existing hashes with the hashes of known passwords.
To create the two password hashes, I run the command “openssl passwd -1” twice: entering the password “12345” (twice, as prompted) for the first hash, which I will use for root, and the password “admin” for the second hash, for admin. I now have two new hashes, similar in form to the existing hashes in the shadow file and with the same number of characters. All I need to do now is replace the existing hashes – simple!
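For reference, the hash generation looks like this. I have added a fixed salt here so the output is reproducible; the interactive `openssl passwd -1` prompts for the password and picks a random salt instead:

```shell
openssl passwd -1 -salt ab 12345    # MD5-crypt hash for root's new password
openssl passwd -1 -salt cd admin    # MD5-crypt hash for admin's new password
```

Each prints a string of the form $1$&lt;salt&gt;$&lt;digest&gt;, which is what goes into the shadow file.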
Actually, not quite that simple. Since I couldn’t mount the raid volume, I can’t write to it as such. But, I could write to the raw disk drives themselves. All I need to do is to find out where that file is stored on the partition.
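The command that reported the inode wasn’t captured here; `ls -i` on the mounted copy is one way to get it:

```shell
# Prints the inode number before the filename, e.g. "60290 /mnt/mydisk/etc/shadow"
ls -i /mnt/mydisk/etc/shadow
```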
The above command tells me that the inode is 60290, but I need the block number. The stat command can tell me, so I run this command:
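The exact invocation wasn’t shown, but the “blocks: (0):981000” output format matches the `stat` command inside `debugfs` from e2fsprogs, run against the loopback device – something like:

```shell
# Dump the inode's details, including its BLOCKS list, straight from the filesystem
sudo debugfs -R 'stat <60290>' /dev/loop7
```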
and voila! I get “blocks: (0):981000” in addition to the inode and other assorted info. Now I know that block 981000 is where the shadow file is stored on the disk. Since block numbering starts from block 0, to reach block 981000 I need to skip the first 981000 blocks. Now that I have the block number, what is the blocksize?
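The blocksize query would have been along these lines (it needs a real block device, hence the sudo):

```shell
# Report the filesystem block size of the loopback device, here 4096
sudo blockdev --getbsz /dev/loop7
```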
Good, blockdev tells me that the blocksize for the loop7 device is 4096. Now I should be able to extract the required block containing the shadow file contents from the disk image I took earlier by doing:
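The command itself wasn’t captured; reconstructed from the description that follows, it would be:

```shell
# Copy block 981000 (4096 bytes) out of the partition image into its own file
dd if=first.dd of=shadowblock skip=981000 bs=4096 count=1
```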
which means that I want to skip the first 981000 blocks of size 4096 and just copy 1 block – agree? When I look at shadowblock, it does contain the shadow file, padded with null characters to the end of the 4096-byte file. Fantastic, I am on track – now I need to find where this sits on the actual disk partition. I know that the first partition is what I need, which for the first disk drive would be /dev/sdc1, but this is a software raid volume, so it wouldn’t be in exactly the same location. To keep this simple: there is a 1MB header (which also contains the superblock for the raid array) in front of the actual partition data, and 1MB is 256 4k blocks, so I just need this command to read from the raw disk partition:
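Reading the same block from the raw raid member, offset by the 256-block header, would look like this (assuming the first member partition shows up as /dev/sdc1, as above):

```shell
# 981256 = 981000 (block within the filesystem) + 256 (1MB raid header)
sudo dd if=/dev/sdc1 of=rawshadowblock skip=981256 bs=4096 count=1
```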
I do this and then use hexdiff to compare the two files, shadowblock and rawshadowblock – and the contents match up exactly. This confirms that I know where the shadow file is stored in the raw partition. How did I get the number 981256? It was 981000 and I added 256 for the header, ending up with 981256.
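If hexdiff isn’t to hand, the same comparison can be done with standard tools:

```shell
# cmp exits 0 (silently) when the two blocks are byte-identical
cmp shadowblock rawshadowblock && echo "blocks match"
# or inspect any differing bytes visually:
# diff <(xxd shadowblock) <(xxd rawshadowblock)
```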
Still on Step 4 – this is a long one. Now that I know where to write the block, I just need to create the new shadow block. I copied the shadow file from the /etc folder, then I replaced the root and admin password hashes with the ones I generated earlier. Now the modified shadow file looks like this:
This modified file is identical in length to the original: 760 bytes. I then copy the file to modshadowblock and make it 4096 bytes in length by padding with null (zero) bytes. To do the padding, I run the command:
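The padding command, reconstructed from the hint that follows and the 1-byte blocksize mentioned later, would be:

```shell
# Append 3336 zero bytes (4096 - 760) so the file fills exactly one 4096-byte block
dd if=/dev/zero bs=1 count=3336 >> modshadowblock
```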
[Hint: 3336 is 4096-760] I run hexdiff to compare this modshadowblock file with either shadowblock or rawshadowblock, and confirmed that the only differences I can see are where I replaced the hashes for the two accounts.
Step 5 – Now that I am happy that I have the replacement block (modshadowblock) that contains the new shadow file, it is the time to write it back to the physical partition. I will need to do this for both of the disk drives, since it is a software raid, both disks must match otherwise the raid will be in error.
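The write-back, repeated for each raid member, would be along these lines (device names are assumptions from my setup – double-check with lsblk before running anything like this):

```shell
# Overwrite the one block holding the shadow file on each raid member
sudo dd if=modshadowblock of=/dev/sdc1 seek=981256 bs=4096 count=1
sudo dd if=modshadowblock of=/dev/sdd1 seek=981256 bs=4096 count=1
```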
As I am writing to the physical disk, I need to use sudo, and if I haven’t used sudo for a while, I am asked for the sudo password. In the above, only 4096 bytes were written to each disk. In case you are wondering why I use skip sometimes and seek sometimes: skip relates to skipping blocks at the input file, while seek moves the write pointer forward in the output block device. If seek is used on a regular file rather than a block device, the skipped-over region is filled with zero bytes, so be careful when using it. The count is the number of blocks to write, and bs is the blocksize. The default blocksize is 512, which is a standard disk sector size, hence it is always best to specify it. You might have noticed, when doing the earlier padding, that I set the blocksize to 1 byte.
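A tiny scratch-file illustration of the skip/seek distinction (conv=notrunc is needed here only because the output is a regular file, not a block device):

```shell
printf 'AAAABBBB' > src.tmp
# skip: the READ starts 4 bytes into the input -> out.tmp contains "BBBB"
dd if=src.tmp of=out.tmp bs=4 skip=1 count=1
printf 'XXXXYYYY' > dst.tmp
# seek: the WRITE lands 4 bytes into the output -> dst.tmp becomes "XXXXAAAA"
dd if=src.tmp of=dst.tmp bs=4 seek=1 count=1 conv=notrunc
```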
I think it is time to put the changed shadow file to the test. Hopefully the operating system doesn’t perform an integrity check of the shadow file, which would detect that the file has been tampered with! In this case I modified the contents of the file by writing directly to disk, so as far as the operating system is concerned, the timestamp of the file has not changed. But, if the operating system keeps a hash of the file contents somewhere, then that is something I need to find. Anyway, let’s see if I am successful.
Step 6 – put the disk drives back in the NAS, power it up and connect to it via the network. I connected my laptop directly to the NAS and set an IP address in the same range as the NAS; I used 10.0.0.1 since it is often used as the gateway. Then I opened a browser and went to https://10.0.0.129/ – accepted the security prompt, and a window popped up asking for a username and password. Once I entered admin and admin (the password whose hash I had generated), I got a shares screen which didn’t look like an admin screen. After a moment of thinking about it, I went to https://10.0.0.129/admin/ and got the correct admin screen. After clicking on Network then Interfaces, I saw this:
Success – no integrity check. I didn’t think there would be, but newer operating systems are likely to incorporate this feature. I have now restored administration access to the ReadyNAS. As the network has changed, I would also need to change the IPv4 assignment to use an address from a DHCP server, so that it will connect to the new company network. Also, the security settings seem to have taken user accounts from an old Windows domain that no longer exists; I need to reset the security policy to use local users, which will allow local user creation. The created local user will then have access to the shares, of which there was only one – called “files”.
Step 7 – tick all the boxes. I think this was a success: I obtained access to the NAS, was able to create a local user account and gained access to the Windows share. This concludes my hypothetical scenario – access to the NAS administration interface was restored, the NAS was connected to the network, and an account was created to access the shares, so the hypothetical company’s requirements are fulfilled.
Summary – people don’t always think about what might happen when items such as this are discarded. The NAS could well have contained proprietary or confidential information, which could easily be accessed by connecting the disk drives to another computer, just as I had done. One company that I know of took data security more seriously and only discarded disk drives after something was done to prevent them from being used. In this particular case, it was to drill a half-inch hole through the entire disk drive, ensuring that it went through the platters.
Where did I put that holey disk drive that I kept after it was partially destroyed? If I find it, I will put in a photo of the damage. Anyway, I hope you enjoyed this. My next step would be to take those disk drives out of the NAS, put them into my Ubuntu machine and perform a security erase on each disk (have I written an article about erasing? Let me check). In the meantime, I could put the NAS to use by installing the two new 4TB disk drives, and then I will have another network storage device on my network. We can never have too much network storage – that is, until we need to find a file and don’t know where we put it! Have a good day!
[P.S. I wonder if there is a bitcoin wallet on the NAS – with the value of BTC at AUD73k as I write this, maybe I should check – just in case, before I erase the disk drives. Hmm, just a thought!]
As you can imagine, I quite often get things given to me on the off chance that they may turn out to be repairable or usable for parts. These things are usually checked when they arrive, to be catalogued as working, non-working, repairable or non-repairable, etc. This NetGear ReadyNAS Pro 2-bay NAS was in the same boat. It was given to me by a friend who said he had a couple of old NAS’s – was I interested in them? I said yes at the time and didn’t think much about it, until one day I happened to arrange to meet my friend as we were going to the same place and I was giving him a lift. He gave me the NAS’s then.
When I got home, I inspected them to see what I had received. One had a power supply, the other did not. I decided to power up the NAS with the power supply, and it didn’t power up. The light indicator on the power supply was not lighting up – so essentially a bad power supply, or is it? As luck would have it, I have another of these identical NAS’s, also from my friend but years earlier. I took the power supply from that one and tried the NAS, and it powered up – thereby confirming that the power supply was at fault.
Then it was left for a while, until one day I had a look to see if I could get the power supply open. It is one of those that come in two halves, glued or ultrasonically welded together. I might have mentioned before that a rubber mallet can be useful in breaking glued seals, and since it wasn’t working, I couldn’t really damage it any more by pounding on it with a rubber-headed mallet. In due course, the seals gave way and I was able to prise the case apart. As soon as it was open, I could see the problem – blown electrolytic capacitors on the output.
Usually they will just have a shiny top, not bulged and blackened like these ones were. I was able to read the values of the electrolytics: 1500uF 16V and 1000uF 16V. These were also the high-temperature 105-degree versions, which are common in power supplies housed in enclosed, tight-fitting cases. I checked on eBay, found a suitable supplier and ordered some. I didn’t bother to match up the rest of the original specifications, since just about any modern replacement should be much better than the faulty originals. If this were a precision repair, then it would make sense to match the specifications exactly.
This was placed aside, and only this morning I was wondering where the capacitors I ordered over a month ago had got to. I checked the delivery status, and it just showed in transit. Then this afternoon, my wife brought in the mail, and in it was a brown padded envelope that looked like it might have a couple of packets of capacitors stuffed into it. It did, and I checked the values – they were what I ordered, which is a bonus sometimes. As I was still busy doing some PowerShell scripting, I had to leave the repair until the evening, after dinner had been squared away – then I had some time.
Out came the desoldering station. It was having a bit of a hard time melting the solder, since the solder used in the power supply was lead-free – and because of the reflow soldering method, the joints don’t carry an excess of solder, and lead-free solder has a higher melting point. To help it along, I tinned each capacitor joint with leaded solder, after which I was able to desolder each joint, and with a slight wiggle both capacitors came off without any further trouble. It was an easy exercise to match up the polarity of the capacitors so the replacements would go back in with the correct orientation. The white stripe on a capacitor indicates the negative pin (not always white – my replacements had a yellow stripe, since the printing was in yellow, but you know what I mean).
Here they are shown soldered in place, with the old faulty ones to the side. Next step was to put the case back together and close it up – without sealing it permanently. Power on, and I could see the LED indicator light up, so it should be OK to attach the NAS.
You can see the NAS powered up – I am holding the power supply at an angle so I could capture the lit indicator. So there it is: I ordered $7.67 worth of capacitors and used approximately $0.77 of the order, so I have plenty of spares in case I need them in the future. Buying just the two individually might have cost me about $4 anyway, so this way I get extras. With this repair, I will leave it running as it is for a day or so, then seal up the power supply. Have a good evening!
I didn’t mention, but the NAS did have two hard drives still installed inside of it – I wonder what’s on them…
To recap, the Arachni web user interface would not allow logons, and I was able to resolve that by making a change to the bcrypt engine.rb file, as shown here:
The change was quite simple, but it took some investigation to determine where the fix should be applied. By adding [0...60] to the line, as seen highlighted, the Arachni web user interface was able to log the user on. You will recall that it did this, but I then found that when I started a scan, it would just time out after 10 minutes.
I didn’t have time to look into that problem, but did mention that it would have to be left for another day. It looks like that day is today. Previous investigation of the Ruby files showed that a binary called phantomjs was having the problem. Usually just running the command “./phantomjs --version” should report the version number, but instead I got the following:
I installed strace on my Kali Linux machine with “sudo apt-get install strace”, then ran strace with phantomjs to see what the heck was going on. strace just reports to stderr, so to capture the output I needed to run “strace ./phantomjs 2> trace.txt”. strace generates a lot of output, but going through the trace.txt file showed me the entry that seemed to match the phantomjs error seen above.
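If you haven’t used strace before, the pattern below shows the essentials – the 2> redirect is the important part, since strace writes its trace to stderr. This is just a sketch using /bin/true as a harmless stand-in for phantomjs, and it skips itself if strace isn’t installed:

```shell
# strace writes the syscall trace to stderr, so redirect fd 2 to a file
# (/bin/true stands in for phantomjs here - any program will do)
if command -v strace >/dev/null 2>&1; then
  strace /bin/true 2> trace.txt
  lines=$(wc -l < trace.txt)   # even a do-nothing program makes many syscalls
  rm -f trace.txt
else
  lines=0   # strace not installed, nothing traced
fi
echo "trace lines: $lines"
```

In my case the real command was “strace ./phantomjs 2> trace.txt”, and grepping trace.txt for “No such file” got me to the failed lookups quickly.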
The “No such file or directory” indicates that it is looking for a file, namely libssl_conf.so, and searching the common folders where it expects to find it. A search of Google indicates that a reported solution is to provide an environment variable that points to a known location of the SSL configuration files – to do this, I would need to export OPENSSL_CONF="/etc/ssl" and then try running phantomjs again.
After doing this, I was able to get the version number of phantomjs – reported as 2.1.1 – fantastic! This fix works for me, but some reports on Google indicate that it doesn’t work for everyone. For the Arachni web user interface to work, I would need to do the same thing when Arachni Web starts up, so I decided to make the change in the shell script that starts the user interface.
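The check I ran amounted to this – note that /etc/ssl is simply what worked on my Kali install, it may differ on other distributions, and this sketch only runs phantomjs if it is present in the current directory:

```shell
# Tell OpenSSL (and hence phantomjs) where the SSL configuration lives
export OPENSSL_CONF="/etc/ssl"

# Now phantomjs should print its version instead of erroring out
if [ -x ./phantomjs ]; then
  ./phantomjs --version
else
  echo "run this from the Arachni bin directory containing phantomjs"
fi
```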
In the file ~/Downloads/arachni-1.5.1-0.5.12/bin/arachni_web – just after the existing export lines – add the line that is highlighted. This ensures that when the web user interface is started, it gets the environment that phantomjs needs in order to run properly.
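For reference, this is roughly how the top of my arachni_web script looked after the change – the surrounding lines are abridged here, so treat the exact placement as approximate:

```shell
#!/usr/bin/env bash
# ~/Downloads/arachni-1.5.1-0.5.12/bin/arachni_web (abridged)

# ... existing export lines from the Arachni distribution ...

# Added: phantomjs needs this to locate the OpenSSL configuration
export OPENSSL_CONF="/etc/ssl"

# ... rest of the original script follows unchanged ...
```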
To summarise, I downloaded and extracted the arachni-1.5.1-0.5.12-linux-x86_64.tar.gz file from Arachni Downloads onto my Kali Linux 2020.4 machine. One more thing, actually: to extract it, I used the command “tar zxvf arachni-1.5.1-0.5.12-linux-x86_64.tar.gz”. I remember having some problems with the Engrampa Archive Manager – for some reason it did not extract all the files. I think phantomjs was one of the files that wasn’t extracted, but I would have to check that again; hence why I used the tar command line for the extraction. Now, I wonder whether this might explain why others have reported that this fix doesn’t work for them – hmm.
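A quick way to check whether an archive manager has silently skipped files is to list the tarball’s contents and look for the file in question. The demo below builds a throwaway tarball standing in for the Arachni archive, since the idea is the same either way:

```shell
# Build a stand-in tarball (in place of arachni-1.5.1-0.5.12-linux-x86_64.tar.gz)
mkdir -p demo/bin
echo stub > demo/bin/phantomjs
tar zcf demo.tar.gz demo

# List the archive without extracting - phantomjs should appear in the listing
listing=$(tar ztf demo.tar.gz)
echo "$listing"

# Clean up the demo files
rm -rf demo demo.tar.gz
```

With the real archive, comparing the “tar ztf” listing against the extracted directory tree would show exactly which files Engrampa missed.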
Once the files were extracted, I modified the engine.rb file so that user logons would work, then modified the arachni_web bash file to include the export – and then everything seems to work for me. The scan of my Small Business Server 2003 machine shows this:
Not really surprising since security updates for SBS 2003 are long gone. That’s it for now – have a good day!
On Saturday, being a wet day, we were not inclined to leave the house and brave the weather. As I had some spare time, I decided to do something about my Dell Inspiron 15 7580 laptop that had been bugging me for a while. The problem was that the fan would come on, and sometimes it would be very loud. It seems that the CPU gets quite hot and the fan then spins up at high speed. A friend mentioned that the thermal grease under the CPU and GPU heatsinks might be the problem; he had replaced the thermal compound on one of his friend’s laptops, which seemed to improve it a little.
I decided to do the same – but had been hesitant about doing it previously, as the laptop is still under warranty. Anyway, a wet afternoon and spare time – so we try to fix something that ain’t broke, at least not completely. Removing the base cover is quite straightforward, and Dell has a good website where you put in your Dell Service Tag and it comes up with options for you to choose. Under documentation, I could find the Inspiron 7580 Service Manual, which you can view as HTML or download as a PDF document.
So, following the instructions: undo six captive screws, remove four other screws, then pry near the hinge and the base cover pops right off. Replacing it is also quite easy – get the front edge lined up and press it together – but for now I needed to get to the heatsink, and the manual is very helpful.
Once the heatsink came off, I could see that the thermal compound was probably a bit old and flaking – though some of it was still soft, so maybe it might have been OK.
However, since I had it off, I needed to clean the compound from the heatsink, the CPU and the GPU. I used an Arctic Silver ArctiClean Thermal Compound Remover kit, which comprises two small dropper bottles containing a Thermal Compound Remover and a Thermal Surface Purifier. After I had used a plastic scraper to remove the bulk of the material, the remover dissolved the rest of the old grease. The remover bottle was almost empty, so I used the remaining drops to clean up the surfaces, then used the purifier to essentially finish the cleaning.
Then, to repaste it, I used Cooler Master MasterGel Maker Nano thermal paste, which I had bought a year or two ago. It was not too expensive, and the reviews indicated that it was quite effective and almost always better than what is applied during manufacture of a laptop. A great tip: after the heatsink is clean, put some thermal paste on it and rub it in, leaving a very fine film. This film supposedly makes a better thermal contact with the paste applied to the CPU and GPU. Follow the instructions for your thermal paste if you need to do this.
After the laptop was back together – did it work? Well, it has only been a day, but it seems to have improved; certainly the fan is not so noisy, though still audible. Using CPUID HWMonitor, I can see that at idle the CPU is down to 41°C – I don’t remember ever seeing it that low. It will still take some time for the thermal compound to bed in, so I will check it after a week and see how it went – but for now, the repaste appears to be successful.
Oh – here is a photo of the GPU and CPU with the heatsink removed and sitting nearby.
I was doing some cleanup in my house late last year and I came across a computer mouse that belonged to my son. It was a Logitech G700 Wireless Mouse that used a rechargeable NiMH battery. I asked him about it and he said that it was faulty – the left click would sometimes not register and/or would release even when pressed. He had replaced it with another mouse, so if I wanted to use it, I was welcome to it. As luck would have it, my Logitech wired mouse was also having some problems. Every so often, I would hear the beep tones indicating that my mouse had disconnected from USB, then reconnected a moment later. It was strange, as I could still see the mouse tracking LED staying lit while this disconnect happened.
I tried the G700 mouse, and it had a really good feel. Yes, I could see that the left mouse button definitely had a problem. I looked on Google and could easily find people who had the exact same problem. As my son is an avid gamer, the left mouse button had most likely experienced a full life, such that the internal springs had weakened – which leads to this problem.
To open the mouse, I needed to remove the Teflon glide pads to access the five tiny screws. It opened up easily, and I could read the switch model numbers. I saw D2FC-F-7N(10M), made by Omron. The 10M is probably the durability rating – 10 million operations. I checked on eBay and found a lot of people selling the D2FC-F-7N, but without the (10M) designator. I wanted to replace both switches anyway, and the eBay pricing is such that the price for 10 is not much more than for 2, so I ordered 10 of them.
As expected, delivery times from China vary, and they arrived a week ago. I had a spare couple of hours this evening, so decided to get the job done. I opened the mouse again, removed the mouse wheel, and then the metal bracket and plastic fittings that held the mouse wheel in place.
As is my practice, I place the screws and bits and pieces in a sort of order, so that I can put these parts back together in the proper order.
This top board needs to be removed so that I can get to the switches – those black rectangular blocks with the white switch activators on top. To do this, I will have to desolder the 14 pins that can be seen at the bottom right of the top board. Let’s find out if my desoldering station is still working.
Yes, success – after some minutes, the top board was loose.
Here is a closeup; now let’s see the bottom of the board.
Those large pads should be an easy job for my desoldering station.
Sure enough, the switches are off, and I now put the new switches nearby with the correct orientation, ready to go in.
Then soldering in the new switches was another easy job. Next step is to put the top board back on.
When the top board is in place, I will solder the 14 pins.
Ok – after an inspection through a magnifying glass, I think the job is done. Well, almost.
The mouse wheel supports are fitted now. You might see two tiny springs at the top, next to the black screws.
The mouse wheel is back in place. Now – where did I put the top cover?
There it is! I just need to connect that orange plug into the white socket, then close the cover, install the screws, and put the teflon glide pads back on – and the repair job will be completed.
The acid test – does it work? Plug the wireless mouse receiver into my laptop, put the battery back in, and switch the mouse on. Yes, it works – both buttons work as they should. Some of you might ask: if it was only the left mouse button that was failing, why replace both? The answer is that if one switch is failing, the other is not far behind. Another answer is that having gone to the trouble of opening it up, while I am there I should replace the other switch at the same time. Agree? It is getting late, so bedtime for me – bye for now.