I use variables all the time, and to be able to re-run a test over and over again, I need a fresh random email address whenever I fill in a form.
For this I define one variable holding the current date and time, and a second variable holding an email address built from it.
My random email address will look like: selenium-20220318_122803@pauledenburg.com
Just store the following as 1 string into the ‘Target’ part of your command (the original did not zero-pad the month and day; the version below does, so the output matches the example above).
const date = new Date(); const pad = n => String(n).padStart(2, '0'); return String(date.getFullYear()) + pad(date.getMonth() + 1) + pad(date.getDate()) + '_' + pad(date.getHours()) + pad(date.getMinutes()) + pad(date.getSeconds());
It will look like this in Selenium IDE:
Now you can use this to create your email address which is unique every time you run your test:
And use it when you want to fill a form.
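For reference, the same YYYYMMDD_HHMMSS stamp can be produced in a shell with date(1) — handy for checking the value the script above should return (this is just an illustration, not part of the Selenium test itself):

```shell
# Print a timestamp in the same YYYYMMDD_HHMMSS format the Selenium script returns
stamp=$(date +%Y%m%d_%H%M%S)
echo "selenium-${stamp}@pauledenburg.com"
```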
In your .json definition file:
{
  "swagger": "2.0",
  ...
  "securityDefinitions": {
    "bearerAuth": {
      "type": "apiKey",
      "in": "header",
      "name": "Authorization"
    }
  },
  ...
  "paths": {
    "/path": {
      "get": {
        "security": [
          {"bearerAuth": []}
        ],
        ...
      }
    }
  }
}
official documentation is here: https://swagger.io/docs/specification/authentication/bearer-authentication/
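With that definition in place, a client is expected to send the token in the Authorization header. A minimal sketch — the token value and the API URL below are placeholders, not real values:

```shell
TOKEN="my-secret-token"  # placeholder -- use your real bearer token
AUTH_HEADER="Authorization: Bearer ${TOKEN}"
printf '%s\n' "${AUTH_HEADER}"
# then call the API with it, e.g.:
# curl -H "${AUTH_HEADER}" https://api.example.com/path
```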
git diff --word-diff=color dump1.sql dump2.sql | less -R
Answer taken from here: https://stackoverflow.com/a/57164008
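A quick way to try this out: create two small dump files and diff them. With --no-index, git diff also compares files that are not tracked in a repository; the file contents below are made up:

```shell
printf 'INSERT INTO users VALUES (1, "alice");\n'  > dump1.sql
printf 'INSERT INTO users VALUES (1, "alicia");\n' > dump2.sql
# --word-diff=plain marks word-level changes inline as [-old-]{+new+}
git diff --no-index --word-diff=plain dump1.sql dump2.sql
```

Note that git diff exits with status 1 when the files differ, so append `|| true` if you use it in a script running under `set -e`.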
The quick way:
git branch --merged master | grep -v '^[ *]*master$' | xargs git branch -d
git remote prune origin
Use the following to have the branches displayed before you’re asked to delete them.
branches=$(git branch --merged master | grep -v '^[ *]*master$'); \
printf '\n\nBranches to be removed:\n---\n'; \
echo ${branches} | xargs -n1; \
printf '---\n\nRemove the branches above? [Ny] ' \
&& read shouldDelete \
&& [[ "${shouldDelete}" =~ [yY] ]] \
&& echo $branches | xargs git branch -d \
|| echo 'aborted'
In your migration, add this as an option to the table:
$this->table(
'specific_costs_users',
['collation'=>'utf8mb4_unicode_ci']
)
->addColumn(...)
I found the answer on StackOverflow.
I noticed today that my server was very slow. Looking at the running processes, I saw that the processes wanwakuang
and 000000
were going crazy.
Searching for wanwakuang
on Google did not yield many results, but this article on HackerNews was very helpful: https://translate.google.com/translate?sl=auto&tl=en&u=http://hackernews.cc/archives/34789
Apparently, wanwakuang is a mining process.
However, I could not find the binary on my system. My server is only running Docker containers, so probably one of the containers was at fault.
To find the docker container with the exploit, I executed the command:
$ find /var/lib/docker -type f -name wanwakuang
/var/lib/docker/overlay2/1752e86653539d82b50cf24c3d3f69b203fe059ca1650447016ca69033d468bf/diff/root/.configrc/a/wanwakuang
/var/lib/docker/overlay2/1752e86653539d82b50cf24c3d3f69b203fe059ca1650447016ca69033d468bf/diff/tmp/.W10-unix/.rsync/a/wanwakuang
/var/lib/docker/overlay2/1752e86653539d82b50cf24c3d3f69b203fe059ca1650447016ca69033d468bf/merged/root/.configrc/a/wanwakuang
/var/lib/docker/overlay2/1752e86653539d82b50cf24c3d3f69b203fe059ca1650447016ca69033d468bf/merged/tmp/.W10-unix/.rsync/a/wanwakuang
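The interesting part of each hit is the overlay ID, the directory right under /var/lib/docker/overlay2. A small sketch to cut it out of such a path using plain shell parameter expansion (the path below is one of the hits above):

```shell
path='/var/lib/docker/overlay2/1752e86653539d82b50cf24c3d3f69b203fe059ca1650447016ca69033d468bf/diff/root/.configrc/a/wanwakuang'
# Strip the fixed prefix, then everything after the first slash:
# what remains is the overlay ID
id=${path#/var/lib/docker/overlay2/}
id=${id%%/*}
echo "${id}"
```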
To find out which Docker container was attached to this overlay, I used this command I found on StackOverflow:
$ docker inspect $(docker ps -qa) \
    | jq -r 'map([.Name, .GraphDriver.Data.MergedDir]) | .[] | "\(.[0])\t\(.[1])"' \
    | grep '1752e86653539d82b50cf24c3d3f69b203fe059ca1650447016ca69033d468bf'
Knowing the name, I could terminate the container. It was only being used for SSH, so it could safely be removed.
I had the issue of cronjobs not working (correctly) on my Docker instances.
This is what I did to fix it:
chmod 0600 /etc/cron.d/cronjob
chown root /etc/cron.d/cronjob
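Besides permissions and ownership, a file in /etc/cron.d also needs a valid format: unlike a user crontab, it has a user field between the schedule and the command. A sketch of such a file (written to /tmp here for illustration; the script name and schedule are made up):

```shell
# /etc/cron.d entries need a user field between the schedule and the command.
cat > /tmp/cronjob <<'EOF'
# m   h  dom mon dow  user  command
*/5   *  *   *   *    root  /usr/local/bin/backup.sh >> /var/log/backup.log 2>&1
EOF
chmod 0600 /tmp/cronjob
# then, as root:
# chown root /tmp/cronjob && mv /tmp/cronjob /etc/cron.d/cronjob
```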
When it failed, I could not find the logs of why it failed.
In order to see the output of the failed cronjobs, I installed postfix (because the output of cronjobs is mailed) and rsyslog.

postfix:
apt-get update; apt-get install -y postfix; mkfifo /var/spool/postfix/public/pickup; service postfix restart

rsyslog:
apt-get update; apt-get install -y rsyslog; rsyslogd &
Now, whenever a cronjob failed, I could find its output in one of two locations:
/var/log/syslog
/var/mail/root
You need to configure CakePHP to create the cache files with the right permissions.
You do this by setting 'mask' => 0666
in file app.php
for the Cache
setting:
// file src/config/app.php (AND app.default.php!)
...
/**
 * Configure the cache adapters.
 */
'Cache' => [
    'default' => [
        'className' => 'Cake\Cache\Engine\FileEngine',
        'path' => CACHE,
        'url' => env('CACHE_DEFAULT_URL', null),
        'mask' => 0666,
    ],
    ...
    '_cake_core_' => [
        'mask' => 0666,
        ...
    ],
    '_cake_model_' => [
        'mask' => 0666,
        ...
    ],
    '_cake_routes_' => [
        'mask' => 0666,
        ...
    ],
    ...
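To see what the 0666 mask buys you: the cache files end up readable and writable by everyone, so the web server user and a CLI user can share them. A shell illustration of those permission bits (the file name is made up):

```shell
# Simulate a cache file created with mask 0666
touch /tmp/cake_cache_demo
chmod 0666 /tmp/cake_cache_demo
stat -c '%a' /tmp/cake_cache_demo   # -> 666
```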
Most probably: the cron daemon is not running, or your file in /etc/cron.d has the wrong permissions or owner.
To fix the above:
service cron start
chmod 0600 /etc/cron.d/<your file>
chown root /etc/cron.d/<your file>
If that does not work, install rsyslog
on your container and debug:
apt-get update; apt-get install -y rsyslog
rsyslogd
Now keep track of your /var/log/syslog
file. It will log all issues it receives from cron.
I found the answer here: https://stackoverflow.com/a/33734915
I fixed this by adding <base href="/" />
in the <head>
of file /public/index.html
:
<!DOCTYPE html>
<html lang="en">
<head>
<base href="/" />
....