Cygwin Portable (Without admin rights)
Short version:
Solution: Start the Cygwin installer with "--no-admin"
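A minimal example, assuming you downloaded the standard 64-bit installer under its usual name, setup-x86_64.exe:
setup-x86_64.exe --no-admin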
FATAL Fatal error during KafkaServer startup, NumberFormatException
This could have taken me a long time to figure out, but fortunately my super awesome colleague (nicknamed Mr. T; he also pities fools) showed me the solution.
We ran into a Kafka broker which would not start and threw the exception pasted at the bottom of this post.
FATAL Fatal error during KafkaServer startup. [..] java.lang.NumberFormatException
The cause is that the string (in this example “hs_err_pid19313”) is actually the name of a JVM error log that ended up inside a topic partition directory. Kafka expects the files in a partition directory to be named after numeric offsets, so it fails when it tries to parse this file name as a number. (Re)move the file and Kafka will start without a problem.
(Tip: use find and grep to quickly locate the file. Go to your Kafka storage directory and run the following command.)
find . | grep hs_err_pid19313
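Once found (say it turned up in a hypothetical partition directory named mytopic-0; the JVM normally names these crash logs hs_err_pid<pid>.log), move it out of the data directory and restart the broker:
mv ./mytopic-0/hs_err_pid19313.log /tmp/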
Citrix Receiver on Linux: SSL Error 61 ("You have not chosen to trust")
Important:
If you don’t know or understand certificates and root/intermediate certificate authorities, get someone who does to follow the instructions below.
I tried connecting to the company’s Citrix server, but kept hitting the same error when opening the connection:
Contact your help desk with the following information: You have not chosen to trust "INSERT YOUR CA HERE", the issuer of the server's security certificate (SSL Error 61)
It seems that Citrix has its own directory where it stores its trusted certs / certificate authorities. Even though a web browser shows that the server’s certificate is trusted (chained to a trusted root CA), those certificates still need to be copied to the directory Citrix actually reads.
In short: Copy the root and intermediate CAs to this directory: /opt/Citrix/ICAClient/keystore/cacerts
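For example, assuming you exported the chain as two hypothetical PEM files named root-ca.crt and intermediate-ca.crt:
sudo cp root-ca.crt intermediate-ca.crt /opt/Citrix/ICAClient/keystore/cacerts/
If your Receiver version ships the ctx_rehash utility (/opt/Citrix/ICAClient/util/ctx_rehash), run it afterwards so the keystore is re-indexed.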
Although GNOME Shell integration extension is running, native host connector is not detected
This is a bit of a nuisance: after a fresh install of Ubuntu GNOME, I was not able to install extensions from extensions.gnome.org.
Firefox asked me if I’d like to install the browser extension, but even after a Firefox restart I still wasn’t able to install any GNOME extensions.
To be precise; this message was shown:
Although GNOME Shell integration extension is running, native host connector is not detected. Refer documentation for instructions about installing connector.
The solution was to install the chrome-gnome-shell package:
sudo apt-get install chrome-gnome-shell
This fixes the message in both Chrome and Firefox.
This ZooKeeper instance is not currently serving requests
When one of your ZooKeeper nodes sends you this message, it means that your ZooKeeper cluster hasn’t started in the right order.
Solution: Restart your cluster (node by node), starting from node 1 (as listed in zoo.cfg)
This problem is easy to diagnose. When the order is wrong, you will get this output:
[myserver:myuser] ~: echo stat | nc localhost 2181
This ZooKeeper instance is not currently serving requests
After you’ve restarted all nodes (in the correct order), you will get this output:
[myserver:myuser] ~: echo stat | nc localhost 2181 |grep Mode
Mode: follower
[myserver:myuser] ~: echo stat | nc localhost 2181 |grep Mode
Mode: leader
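A minimal sketch of the rolling restart, assuming three hypothetical hosts (node1 through node3, in server-ID order from zoo.cfg) that run ZooKeeper as a systemd service named zookeeper:
for host in node1 node2 node3; do
  ssh "$host" 'sudo systemctl restart zookeeper'
done
After each restart, repeat the echo stat check above to confirm the node reports a Mode before moving on to the next one.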
Hope this will help you out!
Calibre will not open, Gdk-Warning, drawable is not a native X11 window
It’s been a while since I’ve used Calibre to manage my Kindle, but today I wanted to transfer some PDFs.
Unfortunately, Calibre stopped working as soon as I tried to open a dialog window.
As it turns out, Fedora has adopted a new display server called Wayland. Since Calibre still depends on the previous display server, X11, it won’t run properly under Wayland.
In my case, the solution was to set a different GDK backend, before starting Calibre.
Solution:
Open a terminal and enter the following command:
GDK_BACKEND=x11 calibre
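As an optional convenience (assuming Bash as your shell), you can make this permanent with an alias:
echo "alias calibre='GDK_BACKEND=x11 calibre'" >> ~/.bashrc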
Why no SSL!? Port is open!
Okay, this has taken me too long not to post, so here it is:
When your firewall is blocking SSL traffic but allowing HTTP traffic, openssl s_client will show the output below: the TCP connection itself succeeds (CONNECTED), but the handshake is immediately reset (write:errno=104, “connection reset by peer”) and zero bytes are read back.
my_host:joris [/etc/stores] openssl s_client -host external_host -port 12345
CONNECTED(00000003)
write:errno=104
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 0 bytes and written 247 bytes
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
---
Who or what is nwtraders.msft?
I was searching for this answer and couldn’t find it quickly, so I decided to create this post. I keep running into nwtraders.msft hostnames (to be precise, the london.nwtraders.msft hostname) because I’m using CentOS images in Vagrant.
NWTraders (Northwind Traders) is a fictional company created by Microsoft to showcase Microsoft Access.
Exclude grep itself from ps
This is so simple it’s just great 🙂
Solution: use a regex in your grep so the grep itself doesn’t show up in the results. The pattern [k]afka still matches the string “kafka”, but the grep process’s own command line now contains the literal text “[k]afka”, which the pattern does not match.
Example:
[vagrant@london kafka]$ ps aux |grep kafka
vagrant 5172 0.8 30.3 3178252 309428 ? Sl 07:00 0:06 java -Xmx1G -Xms1G -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+DisableExplicitGC -Djava.awt.headless=true -Xloggc:/var/log/kafka/kafkaServer-gc.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dkafka.logs.dir=/var/log/kafka -Dlog4j.configuration=file:/etc/kafka/log4j.properties -cp :/usr/bin/../share/java/kafka/*:/usr/bin/../share/java/confluent-support-metrics/*:/usr/share/java/confluent-support-metrics/* io.confluent.support.metrics.SupportedKafka /vagrant/config/kafka0.properties
vagrant 5824 0.0 0.0 103316 836 pts/0 R+ 07:13 0:00 grep kafka <<-- Oh no!
[vagrant@london kafka]$
[vagrant@london kafka]$ ps aux |grep [k]afka
root 5172 0.8 29.6 3178252 302472 ? Sl 07:00 0:04 java -Xmx1G -Xms1G -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+DisableExplicitGC -Djava.awt.headless=true -Xloggc:/var/log/kafka/kafkaServer-gc.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dkafka.logs.dir=/var/log/kafka -Dlog4j.configuration=file:/etc/kafka/log4j.properties -cp :/usr/bin/../share/java/kafka/*:/usr/bin/../share/java/confluent-support-metrics/*:/usr/share/java/confluent-support-metrics/* io.confluent.support.metrics.SupportedKafka /vagrant/config/kafka0.properties
[vagrant@london kafka]$
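As an aside, on systems with a reasonably recent procps, pgrep sidesteps the problem entirely, since it never matches its own process:
pgrep -af kafka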
Using SSH to forward the same local port to multiple external hosts
Okay, this is kinda awesome; I got my geek on 🙂
My application connects to a cluster of external servers; it lets me configure the hostname, but not the port.
So I wanted to reach the remote cluster using SSH tunneling, but I was unable to forward everything, because a given port can only be bound once on localhost (127.0.0.1).
Then I saw that you can use multiple loopback addresses! See this page: https://en.wikipedia.org/wiki/Loopback
Basically, you can bind each port forward to its own loopback address: 127.0.0.2, 127.0.0.3, and so on up to 127.255.255.254. That should provide enough addresses, right!? 🙂
So I can use multiple port forwards from my localhost(s) to the six remote hosts like this:
ssh somedomain.com \
  -L 127.0.0.1:9042:external-node1.somedomain.com:9042 \
  -L 127.0.0.2:9042:external-node2.somedomain.com:9042 \
  -L 127.0.0.3:9042:external-node3.somedomain.com:9042 \
  -L 127.0.0.4:9042:external-node4.somedomain.com:9042 \
  -L 127.0.0.5:9042:external-node5.somedomain.com:9042 \
  -L 127.0.0.6:9042:external-node6.somedomain.com:9042
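To let the application keep using the cluster’s hostnames, you could also point those names at the loopback addresses in /etc/hosts (a sketch matching the hypothetical nodes above):
127.0.0.1 external-node1.somedomain.com
127.0.0.2 external-node2.somedomain.com
127.0.0.3 external-node3.somedomain.com
127.0.0.4 external-node4.somedomain.com
127.0.0.5 external-node5.somedomain.com
127.0.0.6 external-node6.somedomain.com
Note that Linux answers on all of 127.0.0.0/8 out of the box; on macOS you may first have to alias each extra address (for example, sudo ifconfig lo0 alias 127.0.0.2 up).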