Watch files and execute command upon change

Find yourself executing the same command over and over again after applying changes to certain files? Pywatch will be your best friend!

Meet pywatch: a cool little app that watches directories and files. Whenever it finds a file that changed, it executes the command you provided.

TL;DR

As an example, I use this to build a Docker image whenever I save a change to my Dockerfile.

pywatch "docker build . -t pauledenburg/behat" Dockerfile

Or execute tests whenever I make a change to one of the source files.

commandToExecute='docker exec -i hangman_app_1 behat -c tests/behat/behat.yml'
find ./tests -name "*.php" -o -name "*.feature" \
  | xargs pywatch "$commandToExecute"

This keeps an eye on all *.php and *.feature files under ./tests.

When one of these files changes, it executes $commandToExecute which resolves to executing behat in a Docker container.

Install

Download the pywatch app from GitHub: https://github.com/cmheisel/pywatch.

Then unzip and install it with Python.

unzip pywatch-master.zip
cd pywatch-master
sudo python setup.py install
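
Alternatively, if the package is also published on PyPI (I haven't verified this, so check the project page first), a pip install may be all you need:

# assumption: the package is available on PyPI under the name "pywatch"
sudo pip install pywatch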

Advanced usage

Nice one: run tests when files change and show a macOS notification whenever the tests fail.

This way you can keep the tests running in the background and you’ll be notified whenever a test fails.

find src tests -name "*.php" -o -name "*.feature" \
  | xargs pywatch "./dev test phpunit" \
  | grep "([0-9]* failed)" \
  | sed -e 's/.*(\([0-9]* failed\)).*/\1/' \
  | while read failure; 
    do 
      terminal-notifier -message "Test output: $failure" -title "Tests Failed!"
    done

Gitlab CI upload artifact fails: too large

Today I wanted to add a package job to my GitLab CI as instructed in this nice GitLab tutorial.

I created the tar file, but when it came to uploading, it failed with "Request Entity Too Large".

(...)
ERROR: Uploading artifacts to coordinator... too large archive  id=243 responseStatus=413 Request Entity Too Large status=413 Request Entity Too Large token=JYszbA9F
FATAL: Too large                                   
ERROR: Job failed: exit status 1

It took me some digging, but this is how I fixed it (note: the Nginx proxy was the one giving me a hard time).

Step 1: Set the maximum artifacts size

In your GitLab, go to Settings > Continuous Integration and Deployment > Maximum artifacts size (MB) and set it to the desired value. The default is 100 MB.

Step 2: Set the nginx upload size

In the gitlab.rb file (mine is at /etc/gitlab/gitlab.rb), set or uncomment the following line.

nginx['client_max_body_size'] = '250m'

Then reconfigure GitLab for the change to take effect.

gitlab-ctl reconfigure
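
To double-check that the bundled Nginx actually picked up the new limit, you can grep the configuration that reconfigure generated (the path below is where the Omnibus package puts it on my system; yours may differ):

# assumption: Omnibus install; the generated nginx config lives under /var/opt/gitlab
sudo grep -R "client_max_body_size" /var/opt/gitlab/nginx/conf/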

Step 3: (optional) update your proxy(!)

I run GitLab in Docker containers. On the server, I run Nginx as a proxy to redirect requests for GitLab to these containers.

I had failed to update that proxy configuration to allow POST-ing large amounts of data to the containers.

As I use nginx, this is the line I added. For Apache, just google and you’ll find your answer.

client_max_body_size 0;

This removes the limit on the size of the request body clients may send.

For reference, this is my whole nginx vhost file.

server {
    listen 80;
    server_name git.pauledenburg.com;
    client_max_body_size 0;

    location / {
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

Don’t forget to reload nginx.

$ sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

$ sudo service nginx reload

Set up NGINX as a proxy for your Docker containers

Lately I’m a fan of serving applications from Docker containers instead of regular virtual hosts on a webserver.

In order to use regular domain names without ports, I set up Nginx to receive the request on the domain name and forward it to the relevant Docker container on the specific port it is running on.

Example

Imagine I have a Docker webserver-container hosting my app. It runs on my server exposing port 8080. I use the URL app.pauledenburg.com.

I don’t want people to use http://app.pauledenburg.com:8080, but just the URL without the port: http://app.pauledenburg.com.
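
For completeness, this is roughly how such a container could be started; my-app-image is a placeholder, not a real image:

# publish the container's port 80 on port 8080 of the host
# (my-app-image is a hypothetical image name)
docker run -d --name app -p 8080:80 my-app-image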

I use nginx for this:

server {
    listen 80;
    server_name app.pauledenburg.com;

    location / {
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

And now add SSL to it 🙂
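
If certbot with the nginx plugin is installed, a sketch for that last step could look like this (see the LetsEncrypt post further down for the caveats I ran into):

# obtain a certificate and let certbot adjust the nginx vhost
sudo certbot --nginx -d app.pauledenburg.com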

Complete ELK-stack example with Docker

I wanted a quick setup of an Elasticsearch, Logstash and Kibana (ELK) stack to work with. But searching the internet gave me too many long-winded examples that didn’t really work.

That’s why I created this page. Use it to quickly get up-and-running with an ELK-stack of your own.

Create the file docker-compose.yml

# file: docker-compose.yml
version: "3"

services:
  elk:
    image: sebp/elk
    ports:
      - "5601:5601"
      - "9200:9200"
      - "5044:5044"
    environment:
      - MAX_MAP_COUNT=262145
      - ELASTICSEARCH_START=1
      - LOGSTASH_START=1
      - KIBANA_START=1
      - TZ="Europe/Amsterdam"
    volumes:
      - elk-data:/var/lib/elasticsearch

volumes:
  elk-data:

Now start up with docker-compose up -d. That’s it!

  • 5601: endpoint for Kibana
  • 9200: endpoint for Elasticsearch
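
A quick way to check that the stack actually came up (the Kibana status endpoint may differ between versions, so treat this as a sketch):

# Elasticsearch should answer with cluster information
curl http://localhost:9200

# Kibana status endpoint (may vary per Kibana version)
curl http://localhost:5601/api/status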

Add some security

Don’t leave your Elasticsearch open to everyone.

Add some basic security by adding a .htpasswd config to your webserver.

$ sudo sh -c "echo -n 'myelasticuser:' >> /etc/nginx/.htpasswd"
$ sudo sh -c "openssl passwd -apr1 >> /etc/nginx/.htpasswd"
Password:
Verifying - Password:

Add it to your webserver, like nginx.

server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    root /var/www/html;
    index index.html index.htm;

    server_name localhost;

    location / {
        try_files $uri $uri/ =404;
        auth_basic "Restricted Content";
        auth_basic_user_file /etc/nginx/.htpasswd;
    }
}

Reload nginx.

$ sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

$ sudo service nginx reload
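
To verify that the basic auth is actually enforced, a small sketch (adjust host and port to your own setup):

# without credentials: expect an HTTP 401
curl -i http://localhost/

# with credentials: you will be prompted for the password set earlier
curl -u myelasticuser http://localhost/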

Some notes

I chose the Docker image of sebp because he’s got great documentation. Go check it out!

Especially the part with the Frequently Encountered Issues.

There, you’ll see that you’ll:

  • need 4GB of memory for the Docker container
  • need to set the virtual memory limit on Linux by raising the max map count: sudo sysctl -w vm.max_map_count=262144 (see the sketch below for making this persistent)
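
To make that setting survive a reboot, you can persist it; a sketch, assuming your distro reads drop-in files from /etc/sysctl.d:

# set it for the running kernel
sudo sysctl -w vm.max_map_count=262144

# persist it across reboots (assumption: /etc/sysctl.d/*.conf is read at boot)
echo "vm.max_map_count=262144" | sudo tee /etc/sysctl.d/99-elasticsearch.conf
sudo sysctl --system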

Free SSL certificates with LetsEncrypt

Getting your website on https can be done in a matter of minutes. So there is no excuse anymore to go without it. Not even on your test and dev websites.

Although this example is on CentOS, it applies just as well to any other Linux distro.

Excellent, tailor-made instructions per webserver and OS are found on the website of Certbot:
https://certbot.eff.org/

Here, a short recap of that for my own archive.

You’ll need the EPEL repository for this. After that, install the certbot software.

$ sudo yum install epel-release
$ sudo yum install certbot-nginx

Getting your website secured with SSL is now as simple as answering some questions on the following command.

Note: I’m using a method which takes a bit of downtime because LetsEncrypt is in the middle of an update. Read all about it in the troubleshooting section below.

$ sudo certbot --authenticator standalone --installer nginx --pre-hook "service nginx stop" --post-hook "service nginx start"

Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator standalone, Installer nginx
 
Which names would you like to activate HTTPS for?
-------------------------------------------------------------------------------
1: yoursite.pauledenburg.com
-------------------------------------------------------------------------------
Select the appropriate numbers separated by commas and/or spaces, or leave input
blank to select all options shown (Enter 'c' to cancel): 2


Running pre-hook command: service nginx stop
Error output from service:
Redirecting to /bin/systemctl stop nginx.service
 
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for es.git.innospense.com
Waiting for verification...
Cleaning up challenges
Running post-hook command: service nginx start
Error output from service:
Redirecting to /bin/systemctl start nginx.service
 
Deployed Certificate to VirtualHost /etc/nginx/sites-enabled/yoursite.pauledenburg.com.conf for set(['yoursite.pauledenburg.com'])
 
Please choose whether or not to redirect HTTP traffic to HTTPS, removing HTTP access.
-------------------------------------------------------------------------------
1: No redirect - Make no further changes to the webserver configuration.
2: Redirect - Make all requests redirect to secure HTTPS access. Choose this for
new sites, or if you're confident your site works on HTTPS. You can undo this
change by editing your web server's configuration.
-------------------------------------------------------------------------------
Select the appropriate number [1-2] then [enter] (press 'c' to cancel): 2
Redirecting all traffic on port 80 to ssl in /etc/nginx/sites-enabled/yoursite.pauledenburg.com.conf
 
-------------------------------------------------------------------------------
Congratulations! You have successfully enabled https://yoursite.pauledenburg.com
 
You should test your configuration at:
https://www.ssllabs.com/ssltest/analyze.html?d=yoursite.pauledenburg.com
-------------------------------------------------------------------------------
 
IMPORTANT NOTES:
 - Congratulations! Your certificate and chain have been saved at:
   /etc/letsencrypt/live/es.git.innospense.com/fullchain.pem
   Your key file has been saved at:
   /etc/letsencrypt/live/es.git.innospense.com/privkey.pem
   Your cert will expire on 2018-04-24. To obtain a new or tweaked
   version of this certificate in the future, simply run certbot again
   with the "certonly" option. To non-interactively renew *all* of
   your certificates, run "certbot renew"
 - If you like Certbot, please consider supporting our work by:
 
   Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
   Donating to EFF:                    https://eff.org/donate-le
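
The certificates are valid for 90 days, so make sure renewal is taken care of. A sketch for checking and scheduling it (assuming cron is available; some packages already install a systemd timer or cron job for this, so check first):

# simulate a renewal without touching the real certificates
sudo certbot renew --dry-run

# schedule a renewal attempt twice a day (skip this if your package already ships a timer)
echo "0 3,15 * * * root certbot renew --quiet" | sudo tee /etc/cron.d/certbot-renew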

Things which might throw you an error

python-urllib3 version

The first caveat for CentOS 7 is that you need version 1.21 of urllib3 specifically. I had 1.22 installed via yum, which gave me the following error.

ImportError: No module named 'requests.packages.urllib3'

You can see the currently installed version with pip:

pip freeze | grep urllib

To resolve this, first remove the old version with yum and then install the right one with pip:

sudo yum remove python-urllib3 
sudo pip install -Iv https://github.com/shazow/urllib3/archive/1.21.1.tar.gz

pyOpenSSL version

Just like urllib3, pyOpenSSL was at an unsupported version.

sudo yum remove pyOpenSSL
sudo pip install pyOpenSSL

Error message stating that the CA can’t be satisfied

After running

certbot --nginx

you get the following error:

Client with the currently selected authenticator does not support any combination of challenges that will satisfy the CA.

This is because Let's Encrypt temporarily disabled the challenge that the nginx authenticator uses by default, so you need the webroot or standalone method instead.

From the github certbot website:

If you’re serving files for that domain out of a directory on Nginx, you can run the following command:

# Webroot method
$ sudo certbot --authenticator webroot --installer nginx \
  --webroot-path <path to served directory> -d <domain>

If you’re not serving files out of a directory (for instance if you are using proxy_pass), you can temporarily stop your server while you obtain the certificate and restart it after Certbot has obtained the certificate. This would look like:

# Temporary outage method
$ sudo certbot --authenticator standalone --installer nginx \
  -d <domain> --pre-hook "service nginx stop" --post-hook "service nginx start"

SonarQube with Postgres on docker-compose

[updated 2022-08-08]

Struggling to get a working environment with SonarQube and PostgreSQL?

Use the following docker-compose file and be up and running in minutes.

It is as ‘bare’ as possible:

  • use of official Docker images for both PostgreSQL and SonarQube
  • no other configuration required
  • use of volumes so you can backup your data

Recommended system specs

  • >= 3GB of RAM

# file: docker-compose.yml

version: "3"

services:
  sonarqube:
    image: sonarqube:9-community
    # platform: linux/amd64  # uncomment this when using Mac M1
    restart: unless-stopped
    environment:
      - SONARQUBE_JDBC_USERNAME=sonar
      - SONARQUBE_JDBC_PASSWORD=v07IGCFCF83Z95NX
      - SONARQUBE_JDBC_URL=jdbc:postgresql://db:5432/sonarqube
    ports:
      - "9000:9000"
      - "9092:9092"
    volumes:
      - sonarqube_conf:/opt/sonarqube/conf
      - sonarqube_data:/opt/sonarqube/data
      - sonarqube_extensions:/opt/sonarqube/extensions
      - sonarqube_bundled-plugins:/opt/sonarqube/lib/bundled-plugins

  db:
    image: postgres:14.4
    # platform: linux/amd64  # uncomment this when using Mac M1
    restart: unless-stopped
    environment:
      - POSTGRES_USER=sonar
      - POSTGRES_PASSWORD=v07IGCFCF83Z95NX
      - POSTGRES_DB=sonarqube
    volumes:
      - sonarqube_db:/var/lib/postgresql
      # This needs explicit mapping due to https://github.com/docker-library/postgres/blob/4e48e3228a30763913ece952c611e5e9b95c8759/Dockerfile.template#L52
      - postgresql_data:/var/lib/postgresql/data

volumes:
  postgresql_data:
  sonarqube_bundled-plugins:
  sonarqube_conf:
  sonarqube_data:
  sonarqube_db:
  sonarqube_extensions:

Start this stack with the following command:

# start the containers
docker-compose up -d

You can reach your SonarQube instance at http://localhost:9000.

Use the default credentials admin/admin to log in.
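
One host-level prerequisite that is easy to forget: SonarQube embeds Elasticsearch, which needs a higher vm.max_map_count on the Docker host (the value below is the one the SonarQube documentation mentions; double-check it for your version):

# raise the kernel limit required by SonarQube's embedded Elasticsearch
sudo sysctl -w vm.max_map_count=524288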

Change mysql_ to mysqli_ functions

In the process of upgrading PHP 5.3 code I had to change all deprecated mysql_* functions to their mysqli_* counterparts.

For a lot of functions the signature stayed the same.

But mysqli_query and mysqli_connect have different signatures: mysqli_query, for example, takes its arguments in the opposite order. So you can’t just find and replace them.

Instead of doing this manually, I wanted to find and replace recursively while changing the order of the arguments.

In vim:

# change mysql_query(param1, param2) to: 
# mysqli_query(param2, param1)
:%s/mysql_query(\(.\{-}\),\(.\{-}\))/mysqli_query(\2, \1)/g

Using sed:

# on linux

# mysql_query(param1, param2) to 
# mysqli_query(param2, param1)
sed -i 's|mysql_query(\(.*\),\(.*\))|mysqli_query(\2, \1)|g' devices.php

# on mac (otherwise you get the 'invalid command mode' when 
# you run the sed command)

# mysql_query(param1, param2) to: 
# mysqli_query(param2, param1)
sed -i '' -e 's|mysql_query(\([^,]*\),\([^)]*\))|mysqli_query(\2, \1)|g' devices.php

Recursively changing all files:

# in all files under current directory:
# mysql_query(param1, param2) to: 
# mysqli_query(param2, param1)
fgrep -rl mysql_query . | while read file; do
  sed -i '' -e 's|mysql_query(\([^,]*\),\([^)]*\))|mysqli_query(\2, \1)|g' "$file"
done

Note that sed cannot do non greedy matching.

That’s why we match anything that is not the separator, up until the separator, like this:

# non greedy matching with sed
\([^,]*\),

It basically says: match everything that is not a comma, up to the first comma that appears.
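
A quick way to see the difference on a single line (purely illustrative input):

# [^,]* captures only "one"; the greedy .* then takes " two, three"
echo "mysql_query(one, two, three)" \
  | sed 's|mysql_query(\([^,]*\),\(.*\))|mysqli_query(\2, \1)|'
# output: mysqli_query( two, three, one)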

Disable xdebug for one run

This script disables xdebug for one run. No more warnings like:

$ composer update
You are running composer with xdebug enabled. This has a major impact on runtime performance. See https://getcomposer.org/xdebug

and:

$ php-cs-fixer fix --dry-run .
You are running PHP CS Fixer with xdebug enabled. This has a major impact on runtime performance.
If you need help while solving warnings, ask at https://gitter.im/PHP-CS-Fixer, we will help you!

This is what you’ll get

We’ll create a script which will:

  • disable xdebug
  • run your command
  • enable xdebug

We’ll name the script php-no-xdebug (or whatever you like).

With Xdebug (note the last line)

$ php --version
PHP 7.1.10 (cli) (built: Oct  6 2017 01:08:19) ( NTS )
Copyright (c) 1997-2017 The PHP Group
Zend Engine v3.1.0, Copyright (c) 1998-2017 Zend Technologies
    with Xdebug v2.5.5, Copyright (c) 2002-2017, by Derick Rethans

Without Xdebug (note the missing last line)

$ php-no-xdebug --version
PHP 7.1.10 (cli) (built: Oct  6 2017 01:08:19) ( NTS )
Copyright (c) 1997-2017 The PHP Group
Zend Engine v3.1.0, Copyright (c) 1998-2017 Zend Technologies

The script php-no-xdebug

Create the script /usr/local/bin/php-no-xdebug with the following contents.

#!/bin/bash
# file: /usr/local/bin/php-no-xdebug

php=$(which php)

# get the xdebug config
xdebugConfig=$(php -i | grep xdebug | head -n 1)

# no xdebug? Nothing to do!
if [ "$xdebugConfig" == "" ]; then
    $php "$@"
    exit
fi

# get the config file (which should be the first value),
# so strip off everything after the first space of the xdebug config
xdebugConfigFile=$(echo "$xdebugConfig" | cut -d ' ' -f 1)

# test whether we got it right
if [ ! -f "$xdebugConfigFile" ]; then
    echo "No XDebug configfile found!"
    exit 1
fi

# disable xdebug by renaming the relevant .ini file
mv ${xdebugConfigFile}{,.temporarily-disabled}

# dissect the arguments: extract the first one (which should be a script
# or an application in $PATH) from the rest
index=0
for arg in $(echo $@ | tr ' ' "\n")
do
    if [ "$index" == "0" ]; then
        firstArg=$arg
    else
        restArg="$restArg $arg"
    fi

    ((index++))
done

# check whether the command to be executed is a local PHP file
# or something in the $PATH like composer or php-cs-fixer
fullPath="$(which $firstArg)"
if [ "$fullPath" == "" ]; then
    # check whether it's a local file
    if [ ! -f "$firstArg" ]; then
        echo "Could not find $firstArg. No such file or directory"
        exit 1
    else
        # just run the command as given
        $php "$@"
    fi
else
    # run the command with the full path followed by the rest of the arguments provided
    $php $fullPath $restArg
fi

# re-enable xdebug
mv ${xdebugConfigFile}{.temporarily-disabled,}

# test whether the conf file is restored correctly
if [ ! -f "$xdebugConfigFile" ]; then
    echo "Something went wrong with restoring the configfile for xdebug!"
    exit 1
fi

and make it executable

$ chmod +x /usr/local/bin/php-no-xdebug

That’s it! Run it like this:

$ php-no-xdebug composer update
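
If you find yourself typing this a lot, a convenience alias is an option (assuming bash or zsh; inside the script, which resolves the real binary, so the alias does not recurse):

# in ~/.bashrc or ~/.zshrc
alias composer='php-no-xdebug composer'
alias php-cs-fixer='php-no-xdebug php-cs-fixer'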