You might want to order your users by ‘days until their birthday’: people whose birthday is closest come first, and people with no birthday registered (NULL values) come last.
// in User model
public static function getByBirthday()
{
    return User::query()
        ->select('user.*')
        ->selectRaw(
            '365.25 - (
                CASE
                    WHEN TIMESTAMPDIFF(DAY, birthday, CURDATE()) = 0 THEN 364.25
                    WHEN birthday IS NULL THEN 0
                    ELSE TIMESTAMPDIFF(DAY, birthday, CURDATE())
                END
                MOD 365.25
            ) AS days_till_birthday'
        )
        ->orderBy('days_till_birthday');
}
// use it in your code like
$usersByBirthday = User::getByBirthday()->get();
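As a quick sanity check of the raw expression (dates chosen purely for illustration): for someone born on 1990-09-01, on 2022-08-10 TIMESTAMPDIFF returns 11666 days; 11666 MOD 365.25 is 343.25, and 365.25 - 343.25 = 22, which is indeed the number of days until the next birthday (2022-09-01).
-- sanity check of the raw expression with fixed dates
SELECT 365.25 - (TIMESTAMPDIFF(DAY, '1990-09-01', '2022-08-10') MOD 365.25) AS days_till_birthday;
-- returns 22.00 (days until the next birthday)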
When you want to assert that a variable has a specific value, you can use assert.
Note:
1. you return the value in the ‘execute script’ part;
2. you don’t need the curly braces in the assert value.
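If this note refers to the Selenium IDE (that is an assumption on my part; the step values below are purely illustrative), the pattern would look roughly like this:
// sketch, assuming Selenium IDE steps; columns are: Command | Target | Value
execute script | return document.title | pageTitle
assert | pageTitle | My Expected Page Title
The execute script step returns a value and stores it in the variable named in the Value column; the assert step then references that variable by its plain name, without ${...} around it.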
For me, it turned out the stateful property in config/sanctum.php was not filled correctly.
After setting it to the default as shown below, it started working.
// file config/sanctum.php
...
'stateful' => explode(',', env('SANCTUM_STATEFUL_DOMAINS', sprintf(
    '%s%s',
    'localhost,localhost:3000,127.0.0.1,127.0.0.1:8000,::1',
    Sanctum::currentApplicationUrlWithPort()
))),
...
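If your frontend runs on another host or port, you would typically add it through the environment variable that this default reads; the domain below is just an example:
# file .env (example value, adjust to your own frontend domain)
SANCTUM_STATEFUL_DOMAINS=localhost,localhost:3000,spa.example.test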
Did this not fix your problem?
Check this post which might help: https://stackoverflow.com/a/69858100
When you want your tests to be runnable at any time, and as often as you like, you should use random values.
In Postman, click on the name of the Collection and then open the ‘Pre-request Script’ tab.
There, add the following:
// store a helper that returns the current datetime with milliseconds, like 2022810_171012_174
postman.setGlobalVariable("getCurrentDate", () => {
    const date = new Date();
    return String(date.getFullYear())
        + String(date.getMonth() + 1)
        + String(date.getDate())
        + '_'
        + String(date.getHours() < 10 ? "0" + date.getHours() : date.getHours())
        + String(date.getMinutes() < 10 ? "0" + date.getMinutes() : date.getMinutes())
        + String(date.getSeconds() < 10 ? "0" + date.getSeconds() : date.getSeconds())
        + '_'
        + String(date.getMilliseconds());
});
You can now use this function in your tests. This enables you to make your strings (like email addresses) unique by adding the current datetime to them.
To use it, open your test, click on the ‘Pre-request Script’ tab and add the following.
var currentDate = eval(pm.globals.get("getCurrentDate"))();
var randomEmail = `postman-${currentDate}@pauledenburg.com`;
pm.environment.set("randomEmail", randomEmail);
You can now use the generated value in the body of your POST request by referencing it as {{randomEmail}}.
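For instance, the JSON body of a user-registration request (the field names here are just an illustration) could then be:
{
    "name": "Postman test user",
    "email": "{{randomEmail}}"
}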
First of all: do check the Gulp documentation on this: https://gulpjs.com/docs/en/getting-started/async-completion#using-async-await
I had the following gulpfile.js:
// file gulpfile.js
const { series, parallel } = require('gulp');

function build() {
    return series(
        clean,
        parallel(
            images,
            tracker,
            fonts
        ),
        clean_busters
    );
}

exports.build = build;
When running gulp build I got the following errors:
$ gulp build
[11:17:33] Using gulpfile ./gulpfile.js
[11:17:46] The following tasks did not complete: build
[11:17:46] Did you forget to signal async completion?
I fixed it by making the build() function async: async function build().
Then my gulpfile.js looked like the following (note the extra parentheses at the end!):
// file gulpfile.js
const { series, parallel } = require('gulp');

async function build() {
    return series(
        clean,
        parallel(
            images,
            tracker,
            fonts
        ),
        clean_busters
    )();
}

exports.build = build;
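As an aside: Gulp also lets you export a composition directly instead of wrapping it in a function, which sidesteps the async-completion issue altogether. A sketch of that, using the same task functions:
// file gulpfile.js (alternative: export the composition directly)
const { series, parallel } = require('gulp');

exports.build = series(
    clean,
    parallel(images, tracker, fonts),
    clean_busters
);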
This shows you how to enable logging so you can write stuff like $app->log->debug('this will show up in the error_log');
<?php
// --- snip ---
$app = new \Slim\Slim(array(
    'log.enabled' => true,
    'log.level' => \Slim\Log::DEBUG,
));

$app->log->debug('this will show up in your error-log');
// --- /snip ---
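With logging enabled like this, the other levels of Slim 2’s logger can be used the same way; a short sketch:
// other log levels work the same way
$app->log->info('an informational message');
$app->log->error('something went wrong');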
I spent a lot of time figuring out why I kept getting a ‘404 Not Found’ when I wanted to renew my SSL certificate with certbot.
Long story short: an invalid ipv6 DNS mapping.
I got it working by removing the ipv6 DNS entry. I’ll fix it properly when there is more time available.
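A quick way to check whether your domain still has an ipv6 (AAAA) record pointing somewhere it shouldn’t (the domain below is a placeholder):
# check the ipv6 (AAAA) DNS record for a domain
$ dig AAAA example.com +short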
But there were other gotchas as well:
Want to run a CLI command on Docker while debugging it with XDebug in an IDE like PHPStorm?
Then you need to have your environment in order.
First, create the path mappings in PHPStorm by creating a server in Settings / Preferences | Languages & Frameworks | PHP | Servers.
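With that in place, the CLI run needs to tell PHPStorm which server configuration to use. A rough sketch of such a command (the container name, server name and script are assumptions, and Xdebug itself still has to be configured to start a debug session, e.g. xdebug.start_with_request=yes for Xdebug 3):
# run a PHP script inside the container, pointing PHPStorm to the right server config
$ docker exec -e PHP_IDE_CONFIG="serverName=my-docker-server" my-container php my-script.php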
Today I wanted to add a package-job to my Gitlab CI as instructed in this nice Gitlab tutorial.
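The package job itself looked roughly like this (a sketch with illustrative names and paths, not the tutorial’s exact code):
# file .gitlab-ci.yml (illustrative; assumes a 'package' stage is defined in 'stages:')
package:
  stage: package
  script:
    - tar -czf package.tar.gz dist/
  artifacts:
    paths:
      - package.tar.gz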
I created the tar-file, but when it came to uploading it, it failed with Request entity too large.
(...)
ERROR: Uploading artifacts to coordinator... too large archive id=243 responseStatus=413 Request Entity Too Large status=413 Request Entity Too Large token=JYszbA9F
FATAL: Too large
ERROR: Job failed: exit status 1
It took me some digging, but this is how I fixed it (note: the Nginx proxy was the one giving me a hard time).
maximum artifacts size
In your GitLab, go to Settings > Continuous Integration and Deployment > Maximum artifacts size (MB) and set it to the desired value. The default is 100MB.
In the gitlab.rb file (mine is at /etc/gitlab/gitlab.rb), set or uncomment the following line.
nginx['client_max_body_size'] = '250m'
Then reconfigure GitLab for this to take effect.
gitlab-ctl reconfigure
I run GitLab in Docker containers. On the server, I run nginx as a proxy that redirects requests for GitLab to these containers.
I had failed to update the proxy configuration to allow POST-ing large amounts of data to the containers.
As I use nginx, this is the line I added. For Apache, just google and you’ll find your answer.
client_max_body_size 0;
This removes the limit on the size of the client request body (0 means unlimited).
For reference, this is my whole nginx vhost file.
server {
    listen 80;
    server_name git.pauledenburg.com;

    client_max_body_size 0;

    location / {
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
Don’t forget to reload nginx.
$ sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
$ sudo service nginx reload
[updated 2022-08-08]
Struggling to get a working environment with SonarQube and PostgreSQL?
Use the following docker-compose file and be up and running in minutes.
It is as ‘bare’ as possible:
# file: docker-compose.yml
version: "3"

services:
  sonarqube:
    image: sonarqube:9-community
    # platform: linux/amd64 # uncomment this when using Mac M1
    restart: unless-stopped
    environment:
      - SONARQUBE_JDBC_USERNAME=sonar
      - SONARQUBE_JDBC_PASSWORD=v07IGCFCF83Z95NX
      - SONARQUBE_JDBC_URL=jdbc:postgresql://db:5432/sonarqube
    ports:
      - "9000:9000"
      - "9092:9092"
    volumes:
      - sonarqube_conf:/opt/sonarqube/conf
      - sonarqube_data:/opt/sonarqube/data
      - sonarqube_extensions:/opt/sonarqube/extensions
      - sonarqube_bundled-plugins:/opt/sonarqube/lib/bundled-plugins

  db:
    image: postgres:14.4
    # platform: linux/amd64 # uncomment this when using Mac M1
    restart: unless-stopped
    environment:
      - POSTGRES_USER=sonar
      - POSTGRES_PASSWORD=v07IGCFCF83Z95NX
      - POSTGRES_DB=sonarqube
    volumes:
      - sonarqube_db:/var/lib/postgresql
      # This needs explicit mapping due to https://github.com/docker-library/postgres/blob/4e48e3228a30763913ece952c611e5e9b95c8759/Dockerfile.template#L52
      - postgresql_data:/var/lib/postgresql/data

volumes:
  postgresql_data:
  sonarqube_bundled-plugins:
  sonarqube_conf:
  sonarqube_data:
  sonarqube_db:
  sonarqube_extensions:
Start this stack with the following command:
# start the containers
docker-compose up -d
You can reach your SonarQube instance at http://localhost:9000.
Use the default credentials admin/admin to log in.
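From here you could run a first analysis against this instance with the sonar-scanner CLI; a rough sketch (the project key and token are placeholders you create in SonarQube yourself):
# run an analysis from your project directory
sonar-scanner \
  -Dsonar.projectKey=my-project \
  -Dsonar.host.url=http://localhost:9000 \
  -Dsonar.login=<token-created-in-sonarqube>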
Useful links: