import old posts
This commit is contained in:
commit c7a6e7f57e
6 changed files with 773 additions and 0 deletions
43
posts/000/001/article.md
Normal file
@ -0,0 +1,43 @@
---
title: Setup Nginx for Mediawiki
date: "2010-09-15"
tags:
- Nginx
- Mediawiki
slug: setup-nginx-for-mediawiki
---

Two weeks ago, I migrated a server from Apache/mod_php to nginx/php-fpm. Only
today did I manage to remove all the side effects. The latest one:

Static files must not go through php-fpm, but a simple test on file extensions
is ineffective, as URLs like `http://server/File:name_of_the_file.png`
must be processed by PHP.

Here is my final setup, which corrects all the errors I encountered:

```nginx
server {
    listen 80;
    server_name server_name;
    index index.php;
    root /path/to/www/;

    # Serve static files with a far-future expiration
    # date for browser caches
    location ^~ /images/ {
        expires 1y;
    }
    location ^~ /skins/ {
        expires 1y;
    }

    # Pass the request to php-fpm
    location / {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_param SCRIPT_FILENAME $document_root/index.php;
        fastcgi_index index.php;
        include fastcgi_params;
    }
}
```
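Why a test on file extensions fails can be made concrete with a tiny sketch of the location matching above (plain Python standing in for nginx; the prefixes are the ones from the config):

```python
# Sketch of the routing decision: the ^~ prefix locations serve /images/
# and /skins/ statically, and every other URI, whatever its apparent
# "extension", falls through to php-fpm.
STATIC_PREFIXES = ("/images/", "/skins/")

def backend(uri):
    """Return which backend the config above would pick for a URI."""
    return "static" if uri.startswith(STATIC_PREFIXES) else "php-fpm"

print(backend("/images/logo.png"))           # static
print(backend("/File:name_of_the_file.png")) # php-fpm, despite the .png
```

The point is that `/File:name_of_the_file.png` ends in `.png` but is a wiki page, so routing must be decided by path prefix, not extension.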
29
posts/000/002/article.md
Normal file
@ -0,0 +1,29 @@
---
title: Build the latest PgPool-II on Debian Etch
date: "2010-12-14"
tags: [Debian, PgPool-II]
slug: build-pgpool-on-debian
---

After having built PgPool-II on Red Hat Enterprise Linux 5.5 without any
problem, I tried to build it on a fresh Debian Etch. The catch is that I did
not want to install PostgreSQL 9.0, but just extract it from the binary
packages provided by EnterpriseDB (with the option `--extract-only 1`).

Whatever options I passed to `./configure`, it resulted in the same error:

{{< highlight text >}}
checking for PQexecPrepared in -lpq... no
configure: error: libpq is not installed or libpq is old
{{< /highlight >}}

Here is the answer: the binary package ships `libcrypto.so.0.9.8` (the RHEL
name), while pgpool looks for `libcrypto.so.6` on Debian, so the libpq check
fails. The same applies to `libssl`. So a simple

{{< highlight bash >}}
ln -s libcrypto.so.0.9.8 libcrypto.so.6
ln -s libssl.so.0.9.8 libssl.so.6
{{< /highlight >}}

before your `./configure` will solve it!
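The fix works because the loader resolves a symlink like any other file: opening the Debian SONAME lands on the RHEL-named library. A throwaway Python sketch of the same idea, with dummy files standing in for the real libraries:

```python
import os
import tempfile

# Dummy files stand in for the EnterpriseDB libraries; the symlinks give
# them the SONAMEs that pgpool's configure script looks for on Debian.
libdir = tempfile.mkdtemp()
for real in ("libcrypto.so.0.9.8", "libssl.so.0.9.8"):
    open(os.path.join(libdir, real), "w").close()

os.symlink("libcrypto.so.0.9.8", os.path.join(libdir, "libcrypto.so.6"))
os.symlink("libssl.so.0.9.8", os.path.join(libdir, "libssl.so.6"))

# Opening libcrypto.so.6 ends up reading the RHEL-named file
print(os.path.realpath(os.path.join(libdir, "libcrypto.so.6")))
```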
87
posts/000/003/article.md
Normal file
@ -0,0 +1,87 @@
---
slug: aptana-eclipse-and-xulrunner
title: Aptana Studio/Eclipse and Xulrunner
tags: [Aptana Studio, Eclipse, Xulrunner, Arch Linux]
date: "2011-12-16"
---

For a few months, I had been running into an annoying error in Aptana Studio
and Eclipse 3.7 (the standalone packages, not the packages from the
repositories) whenever I tried to perform a git or hg action.

I could live without it until now, but today, it was really bothering me.

The error is:

{{< highlight text >}}
Unhandled event loop exception
No more handles [Unknown Mozilla path (MOZILLA_FIVE_HOME not set)]
{{< /highlight >}}

The log file showed the following backtrace:

{{< highlight text >}}
!ENTRY org.eclipse.ui 4 0 2011-12-16 17:17:30.825
!MESSAGE Unhandled event loop exception
!STACK 0
org.eclipse.swt.SWTError: No more handles [Unknown Mozilla path (MOZILLA_FIVE_HOME not set)]
    at org.eclipse.swt.SWT.error(SWT.java:4109)
    at org.eclipse.swt.browser.Mozilla.initMozilla(Mozilla.java:1739)
    at org.eclipse.swt.browser.Mozilla.create(Mozilla.java:656)
    at org.eclipse.swt.browser.Browser.<init>(Browser.java:119)
    at com.aptana.git.ui.internal.actions.CommitDialog.createDiffArea(CommitDialog.java:237)
    at com.aptana.git.ui.internal.actions.CommitDialog.createDialogArea(CommitDialog.java:158)

[...]

    at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:620)
    at org.eclipse.equinox.launcher.Main.basicRun(Main.java:575)
    at org.eclipse.equinox.launcher.Main.run(Main.java:1408)
    at org.eclipse.equinox.launcher.Main.main(Main.java:1384)
{{< /highlight >}}

To make it short, after having read [a](https://bugs.archlinux.org/task/5149)
[lot](https://bugs.archlinux.org/task/27130)
[of](https://github.com/eclipse-color-theme/eclipse-color-theme/issues/50)
[posts](https://bbs.archlinux.org/viewtopic.php?id=129982)
[about](http://forums.gentoo.org/viewtopic-t-827838-view-previous.html?sid=546c5717e2167c45d9b02f9f20ab36f4)
[this](http://stackoverflow.com/questions/1017945/problem-with-aptana-studio-xulrunner-8-1)
[problem](http://www.eclipse.org/swt/faq.php#gtk64), it seemed it was enough
to give Aptana the path to Xulrunner.

On my Arch Linux, it was

{{< highlight bash >}}
export MOZILLA_FIVE_HOME=/usr/lib/xulrunner-8.0
{{< /highlight >}}

Trying to start Aptana Studio, I got a new error. It simply stated

{{< highlight text >}}
XPCOM error -2147467261
{{< /highlight >}}

The problem is that Aptana Studio cannot work with the version of Xulrunner
in the Arch Linux repositories because it is too recent.

To solve this problem, I had to install xulrunner 1.9.2 from the AUR:

{{< highlight bash >}}
yaourt -S xulrunner192
{{< /highlight >}}

The PKGBUILD was broken this morning and ended in a 404 error when fetching
the sources. If you have the same problem, [here is an updated
PKGBUILD](https://gist.github.com/1486851).

Finally, I put

{{< highlight bash >}}
-Dorg.eclipse.swt.browser.XULRunnerPath=/usr/lib/xulrunner-1.9.2
{{< /highlight >}}

at the end of the `AptanaStudio3.ini` file in the Aptana Studio folder. For
the package in the Arch Linux repositories, this file is
`/usr/share/aptana/AptanaStudio3.ini`.
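The first error boils down to one question: is `MOZILLA_FIVE_HOME` set, and does it point at an existing directory? A small sanity check of my own (not part of SWT or Aptana) that you can run before launching the IDE:

```python
import os
import tempfile

def mozilla_home_ok(env):
    """True when MOZILLA_FIVE_HOME is set and points to a real directory."""
    path = env.get("MOZILLA_FIVE_HOME")
    return path is not None and os.path.isdir(path)

print(mozilla_home_ok({}))  # False: variable not set
existing = tempfile.mkdtemp()  # any existing directory would do
print(mozilla_home_ok({"MOZILLA_FIVE_HOME": existing}))  # True
```

To check your real shell environment, call it as `mozilla_home_ok(os.environ)`.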
176
posts/000/004/article.md
Normal file
@ -0,0 +1,176 @@
---
tags: [Python, Buzhug, Database, Locks]
slug: locking-buzhug
title: Locking Buzhug
date: "2012-02-07"
---

I recently decided to use [Buzhug] on a project. As far as I can tell,
it has proven efficient, fast, easy to use and easy to maintain. However,
I ran into a few gotchas.

[Buzhug]: http://buzhug.sourceforge.net

Simple solutions are often the best
===================================

I came to use Buzhug with the following requirements:

- I needed a single table
- I did not want to add additional dependencies to the project
- The size of the table will average 5k entries (without having more than
  10k entries in peaks)

And an additional (personal) one:

- I did not want to bother with SQL. Really not. No way!

That left me one option: a pure-Python embedded database.

After having considered a few libraries, I was seduced by how close the
Buzhug interface is to manipulating Python objects. And the benchmarks
seemed to show that it performs well enough for this project.

After a quick prototyping phase (one day), the choice was made.

Then came a few weeks of development and the first stress tests...


And the real world came back fast
=================================

A few times a day, the application backed by this database is used intensely:

- It can be run up to 50 times simultaneously, in separate Python processes
- Each run performs a read and a write/delete operation

This causes a race condition on the files used to store data, and concurrent
writes corrupt the database.

Using `buzhug.TS_Base` instead of `buzhug.Base` did not solve anything,
as the problem is not threads, but processes. What I need is a system-wide,
cross-process lock.


Here is the answer
==================

The first step was to find out how to implement a cross-process, system-wide
lock. As it only has to work on Linux, the
[Lock class given by Chris from
Vmfarms](http://blog.vmfarms.com/2011/03/cross-process-locking-and.html) fits
perfectly. Here is a version slightly modified to make it a context manager:

{{< highlight python >}}
import fcntl

class PsLock:
    """
    Taken from:
    http://blog.vmfarms.com/2011/03/cross-process-locking-and.html
    """
    def __init__(self, filename):
        self.filename = filename
        self.handle = open(filename, 'w')

    # Bitwise OR with fcntl.LOCK_NB if you need a non-blocking lock
    def acquire(self):
        fcntl.flock(self.handle, fcntl.LOCK_EX)

    def release(self):
        fcntl.flock(self.handle, fcntl.LOCK_UN)

    def __del__(self):
        self.handle.close()

    def __enter__(self):
        self.acquire()
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.release()
{{< /highlight >}}

The second step is to define a new class that inherits from `buzhug.Base`
and uses `PsLock` (inspired by `TS_Base`):

{{< highlight python >}}
import buzhug

_lock = PsLock("/tmp/buzhug.lck")

class PS_Base(buzhug.Base):

    def create(self, *args, **kw):
        with _lock:
            res = buzhug.Base.create(self, *args, **kw)
        return res

    def open(self, *args, **kw):
        with _lock:
            res = buzhug.Base.open(self, *args, **kw)
        return res

    def close(self, *args, **kw):
        with _lock:
            res = buzhug.Base.close(self, *args, **kw)
        return res

    def destroy(self, *args, **kw):
        with _lock:
            res = buzhug.Base.destroy(self, *args, **kw)
        return res

    def set_default(self, *args, **kw):
        with _lock:
            res = buzhug.Base.set_default(self, *args, **kw)
        return res

    def insert(self, *args, **kw):
        with _lock:
            res = buzhug.Base.insert(self, *args, **kw)
        return res

    def update(self, *args, **kw):
        with _lock:
            res = buzhug.Base.update(self, *args, **kw)
        return res

    def delete(self, *args, **kw):
        with _lock:
            res = buzhug.Base.delete(self, *args, **kw)
        return res

    def cleanup(self, *args, **kw):
        with _lock:
            res = buzhug.Base.cleanup(self, *args, **kw)
        return res

    def commit(self, *args, **kw):
        with _lock:
            res = buzhug.Base.commit(self, *args, **kw)
        return res

    def add_field(self, *args, **kw):
        with _lock:
            res = buzhug.Base.add_field(self, *args, **kw)
        return res

    def drop_field(self, *args, **kw):
        with _lock:
            res = buzhug.Base.drop_field(self, *args, **kw)
        return res
{{< /highlight >}}

Now I just use

{{< highlight python >}}
database = PS_Base( ... )
{{< /highlight >}}

And all the errors have vanished.
32
posts/000/005/article.md
Normal file
@ -0,0 +1,32 @@
---
tags: [Sublime Text 2]
slug: automatically-open-sublime-text-projects-in-a-directory
title: Automatically open Sublime Text projects in a directory
date: 2013-05-15
---

I usually start Sublime Text 2 from the command line to work, depending
on the case, on the content of a directory or on a project (materialized
by a `*.sublime-project` file).

It ends up with one of the following commands:

- `subl .`
- `subl my-project.sublime-project`

Here is the snippet I added to my `.bashrc` file to have the `subl`
command automatically "guess" what I want. It does the following:

- If a path is given (`subl my/file.txt`), it opens the file.
- If nothing is given and a `.sublime-project` file exists in the current
  directory, it opens it.
- If nothing is given and no `.sublime-project` file has been found, it
  opens the folder.

{{< highlight bash >}}
function project_aware_subl {
    project_file=$(ls *.sublime-project 2>/dev/null | head -n 1)
    subl ${*:-${project_file:-.}}
}
alias subl="project_aware_subl"
{{< /highlight >}}
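The decision the wrapper makes can be spelled out as a tiny function (a Python paraphrase of the bash snippet, just to make the fallback order explicit):

```python
import glob
import os
import tempfile

def subl_target(args, cwd="."):
    """What the wrapper ends up passing to subl: explicit arguments win,
    then the first *.sublime-project file, then the current directory."""
    if args:
        return args
    projects = sorted(glob.glob(os.path.join(cwd, "*.sublime-project")))
    return [projects[0]] if projects else ["."]

# Demo in a throwaway directory
d = tempfile.mkdtemp()
empty_choice = subl_target([], d)       # no project file yet
open(os.path.join(d, "demo.sublime-project"), "w").close()
project_choice = subl_target([], d)     # now picks the project file
explicit_choice = subl_target(["my/file.txt"], d)  # explicit path wins
print(empty_choice, explicit_choice)
```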
406
posts/000/006/article.md
Normal file
@ -0,0 +1,406 @@
---
title: Discourse without Docker
slug: discourse-without-docker
date: 2016-06-27
tags: [discourse, docker]
---

{{< warning >}}
The only official installation method is [with docker]. You might not be able
to get support from Discourse if you follow this method.

[with docker]: http://blog.discourse.org/2014/04/install-discourse-in-under-30-minutes/
{{< /warning >}}


The team behind [Discourse] has chosen to only release Docker images of
their software. The rationale behind it is that it is easier to only support
a single setup. I will not discuss that. It is their choice.

However, I don't like to use Docker to deploy apps in production. I even
hate it. If you are like me, here are the steps I used to install it
and to set it up.

I use Debian servers in production, so the steps below are all Debian
oriented.

{{< note >}}
This is not intended as a comprehensive guide. A lot of commands and
configuration files might need to be adapted to your environment.

It does not even try to address important production topics such as
security. This is left as an exercise for the reader.
{{< /note >}}


# Installation

After all, Discourse is a Rails application. It can be installed like
any other Rails application.

First things first: Discourse uses Redis and PostgreSQL (or at least,
I prefer to use Postgres). I also use Nginx as a proxy in front of the
application. Install the external dependencies:

```sh
# Add the repository for Redis
echo "deb http://packages.dotdeb.org jessie all" > /etc/apt/sources.list.d/dotdeb.list
wget https://www.dotdeb.org/dotdeb.gpg -O - | apt-key add -

# Add the repository for PostgreSQL
echo "deb http://apt.postgresql.org/pub/repos/apt/ jessie-pgdg main" > /etc/apt/sources.list.d/postgresql.list
wget -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -

apt-get update
apt-get install postgresql-9.5 redis-server nginx
```

Then, create a database for the application. Enter the postgres command
line interface:

```sh
su - postgres -c psql
```

and enter the following commands:

```sql
CREATE DATABASE discourse;
CREATE USER discourse;
ALTER USER discourse WITH ENCRYPTED PASSWORD 'password';
ALTER DATABASE discourse OWNER TO discourse;
\connect discourse
CREATE EXTENSION hstore;
CREATE EXTENSION pg_trgm;
```

Then, you can check out the Discourse code:

```sh
git clone https://github.com/discourse/discourse.git /path/to/discourse

# Optionally, checkout a specific tag
cd /path/to/discourse
git checkout v1.5.3
```

Then, go to the application top directory, and set it up as any Rails
application:

```bash
# Optionally set up rvm with ruby 1.9.3 minimum (I use 2.3.0)
rvm install 2.3.0
rvm use 2.3.0

# Install dependencies
cd /path/to/discourse
RAILS_ENV=production bundle install
```


It's time to configure the application.

Here, Discourse has a little particularity: the production
configuration is located in the file `./config/discourse.conf`.

Create this file:

```bash
cp config/discourse_defaults.conf config/discourse.conf
```

And edit it with your configuration. The main areas of interest are the
configuration of the database:

```ini
# host address for db server
# This is set to blank so it tries to use sockets first
db_host = localhost

# port running db server, no need to set it
db_port = 5432

# database name running discourse
db_name = discourse

# username accessing database
db_username = discourse

# password used to access the db
db_password = password
```

and of the SMTP server (in this example, we use Gmail):

```ini
# address of smtp server used to send emails
smtp_address = smtp.gmail.com

# port of smtp server used to send emails
smtp_port = 587

# domain passed to smtp server
smtp_domain = gmail.com

# username for smtp server
smtp_user_name = your-address@gmail.com

# password for smtp server
smtp_password = password

# smtp authentication mechanism
smtp_authentication = plain

# enable TLS encryption for smtp connections
smtp_enable_start_tls = true
```
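`discourse.conf` is a flat `key = value` file, so it is easy to sanity-check your edits before starting the app. A throwaway sketch of my own (not a Discourse tool) that parses such a file and lets you verify the database settings are filled in:

```python
def parse_conf(text):
    """Parse flat 'key = value' lines, skipping comments and blanks."""
    conf = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        conf[key.strip()] = value.strip()
    return conf

# A fragment in the same format as discourse.conf
sample = """
# host address for db server
db_host = localhost
db_name = discourse
db_username = discourse
"""
conf = parse_conf(sample)
print(conf["db_name"])  # discourse
```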
Now, we can prepare Discourse for production:

```bash
RAILS_ENV=production bundle exec rake db:migrate
RAILS_ENV=production bundle exec rake assets:precompile
```

It is time to start the application. I usually use Puma to deploy
Rails apps.

Create the file `config/puma.rb` in the discourse directory. The following
content should be enough (for more info, see [Puma's documentation]):

```ruby
#!/usr/bin/env puma

application_path = '/home/discuss.waarp.org/discourse'
directory application_path
environment 'production'
daemonize false
pidfile "#{application_path}/tmp/pids/puma.pid"
state_path "#{application_path}/tmp/pids/puma.state"
bind "unix://#{application_path}/tmp/sockets/puma.socket"
```

From there, the application can be run with the following command:

```bash
bundle exec puma -C config/puma.rb
```

Finally, set up Nginx to forward requests to Discourse. Create the file
`/etc/nginx/conf.d/discourse.conf` with the following content:

```nginx
upstream discourse {
    server unix:/path/to/discourse/tmp/sockets/puma.socket;
}

server {
    listen 80;
    server_name example.com;

    location / {
        try_files $uri @proxy;
    }

    location @proxy {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://discourse;
    }
}
```

Your very own forum with Discourse is set up!


# Service Management

Depending on your workflow, you can add systemd units to run Discourse.
It needs at least two service definitions:

1. Sidekiq, which is used to process asynchronous background tasks
2. Rails, for Discourse itself.

With the services set up, they can be started/stopped/enabled with
`systemctl` commands.

But before that, if you use RVM, you must create a wrapper for the
environment (local ruby, and optional gemset) used by Discourse:

```bash
rvm wrapper 2.3.0 systemd bundle
```

This creates an executable in `$rvm_bin_path` that you can call
in lieu of `bundle` and that automatically loads the right environment.

## Sidekiq

First, create a configuration for Sidekiq. Create the file
`config/sidekiq.yml` in your discourse project with the following
content (for more info, see [Sidekiq's documentation]):

```yaml
---
:concurrency: 5
:pidfile: tmp/pids/sidekiq.pid
staging:
  :concurrency: 10
production:
  :concurrency: 20
:queues:
  - default
  - critical
  - low
```

Then, create the service unit for Sidekiq. Create the file
`/etc/systemd/system/discourse-sidekiq.service` with the
following content:

```ini
[Unit]
Description=discourse sidekiq service
After=multi-user.target

[Service]
WorkingDirectory=/path/to/discourse
Environment=RAILS_ENV=production
ExecStart=/path/to/rvm/.rvm/bin/systemd_bundle exec sidekiq -C config/sidekiq.yml
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
```


## Discourse

For Discourse, just create the service unit for Puma. Create the file
`/etc/systemd/system/discourse.service` with the
following content:

```ini
[Unit]
Description=discourse service
After=discourse-sidekiq.service
Requires=discourse-sidekiq.service

[Service]
WorkingDirectory=/path/to/discourse
Environment=RAILS_ENV=production
ExecStart=/path/to/rvm/.rvm/bin/systemd_bundle exec puma -C config/puma.rb
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
```

# Upgrades

Upgrades are even easier:

First, read the release notes.
Then make backups of the code and the database.
Now you can check out the newest version:

```bash
cd /path/to/discourse
git checkout vX.X.X
```

Install the new dependencies, run the migrations and rebuild the
assets:

```bash
RAILS_ENV=production bundle install
RAILS_ENV=production bundle exec rake db:migrate
RAILS_ENV=production bundle exec rake assets:precompile
```

Restart Discourse:

```bash
systemctl restart discourse
```

What can go wrong? Even if I do not give any solution here, it is always
recoverable (hence the backups!):

- The database migration failed (restore the database with your backup,
  fix the problem and try again!)
- The plugins are not compatible with the latest version (roll back to
  the previous working version and wait for them to be compatible)
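The upgrade sequence above is always the same five commands, so I find it convenient to see it as data. A sketch of my own (not a Discourse tool) that builds the command list for a given version tag:

```python
def upgrade_commands(version):
    """The fixed upgrade sequence from the section above, as a list."""
    rake = "RAILS_ENV=production bundle exec rake"
    return [
        "git checkout " + version,
        "RAILS_ENV=production bundle install",
        rake + " db:migrate",
        rake + " assets:precompile",
        "systemctl restart discourse",
    ]

cmds = upgrade_commands("v1.5.3")
for cmd in cmds:
    print(cmd)
```

Printing the list (rather than running it) makes a handy checklist to paste into a shell, one command at a time, after the backups are done.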
# Plugins

Discourse plugins can be handled the same way.

## Installation

Install a plugin with the URL of its repository:

```bash
cd /path/to/discourse
RAILS_ENV=production bundle exec rake plugin:install[URL]
```

Install the new dependencies, run the migrations and rebuild the
assets:

```bash
RAILS_ENV=production bundle install
RAILS_ENV=production bundle exec rake db:migrate
RAILS_ENV=production bundle exec rake assets:precompile
```

Restart Discourse:

```bash
systemctl restart discourse
```

## Upgrade

To upgrade a specific plugin, use the following command:

```bash
RAILS_ENV=production bundle exec rake plugin:update[ID]
```

You can also upgrade all plugins at once with the command:

```bash
RAILS_ENV=production bundle exec rake plugin:update_all
```

Then, install the new dependencies, run the migrations and rebuild the
assets:

```bash
RAILS_ENV=production bundle install
RAILS_ENV=production bundle exec rake db:migrate
RAILS_ENV=production bundle exec rake assets:precompile
```

and restart Discourse:

```bash
systemctl restart discourse
```

[Discourse]: http://www.discourse.org/
[Sidekiq's documentation]: https://github.com/mperham/sidekiq/wiki/Advanced-Options
[Puma's documentation]: https://github.com/puma/puma