PRINCE2 project management with OpenProject

When managing complex projects, it is beneficial to use a project management methodology for guidance. PRINCE2 is one of the most popular and widely used methodologies available.

What is PRINCE2?

PRINCE2 (PRojects IN Controlled Environments) offers a structured process for projects and provides recommendations for each project phase. It is one of the leading project management methodologies, next to PMBOK from the Project Management Institute, and is used in over 150 countries.

Basic principles of PRINCE2

PRINCE2 provides a clear structure for projects and is based on 7 principles, 7 themes and 7 processes, which are described below.

7 Principles

PRINCE2 is built on seven principles, which represent guiding obligations and good practices.

The 7 Principles are:

  • Continued Business Justification: A project must make good business sense (justified use of time and resources, clear return on investment).
  • Learn from Experience: Previous projects should be taken into account. Project teams use a lessons log for this purpose.
  • Define Roles and Responsibilities: The decision makers in the project are clearly defined. Everyone in the project knows what they and others are doing.
  • Manage by Stages: Difficult tasks are broken into manageable chunks, or management stages.
  • Manage by Exception: The project board is only informed if there is or may be a problem. As long as the project is running well, there is not a lot of intervention from managers.
  • Focus on Products: Everyone knows ahead of time what is expected of the product. Product requirements determine work activity.
  • Tailor to the Environment: The PRINCE2 methodology can be tailored and scaled. Projects which are adjusted based on the actual needs perform better in general than projects which use PRINCE2 dogmatically.


7 Themes

In addition to these 7 Principles, there are 7 Themes which are addressed continually throughout the project. They provide guidance for how the project should be managed. They are set up at the beginning of the project and then monitored continually to keep the project on track:

  • Business Case: This theme is used to determine if a project is worthwhile and achievable. It is related to the principle of Continued Business Justification.
  • Organisation: Project managers are required to keep a record of every team member’s roles and responsibilities. It is related to the Define Roles and Responsibilities principle.
  • Quality: At the beginning of the project the project manager defines what constitutes the quality of the project. This is related to the Focus on Products principle.
  • Plans: A plan is set up which describes how objectives are going to be achieved. It is focused on cost, quality, benefits, timescale and products.
  • Risk: Uncertain events during the project are identified, assessed and controlled. They are recorded in a risk log. Positive risks are called opportunities, negative risks are called threats.
  • Change: How to handle change requests and issues which arise during the project. Changes shouldn’t be avoided but they should be agreed on before they are executed.
  • Progress: This theme is about tracking the project. This allows project managers to verify and control whether they are performing according to the project plan.


7 Processes

To structure the step-wise progression through a project, there are 7 Processes. Each step is overseen by the project manager and approved by the project board:

  • 1. Starting up a project
    • Create a project mandate to answer logistical questions about the project. It covers the purpose of the project, who will carry it out and how to execute it.
    • From the project mandate a project brief is derived, drawing on the lessons log and discussions with project members.
    • A project team is assigned.
  • 2. Initiating a project
    • During this stage the project manager determines what needs to be done to complete the project and outlines how the performance targets (cost, time, quality, benefits, risk, scope) will be managed.
  • 3. Directing a project
    • This is an ongoing process covering the entire life time of the project.
    • The project board manages activities such as initiation, stage boundaries, guidance, project closure.
  • 4. Controlling a stage
    • The project manager breaks the project into work packages / manageable activities and assigns them to the project members.
    • The project manager oversees and reports the work package progress.
  • 5. Managing product delivery
    • This process controls how the communication between the team and the project manager takes place.
    • The activities include accepting, executing and delivering work packages.
  • 6. Managing stage boundaries
    • The project manager and the board review every stage. The board decides whether to continue the project. The project manager records lessons learned with the team for the next stage.
    • This process includes
      • Planning the next stage
      • Updating the project plan
      • Updating the business case
      • Reporting the stage end or producing an exception plan
  • 7. Closing a project
    • In the final process the project is closed. This includes decommissioning the project, identifying follow-on actions, preparing project evaluation and benefits reviews, freeing up leftover resources and handing over products to the customer


Quickstart: Install SQL Server and create a database on Ubuntu

Install SQL Server

To configure SQL Server on Ubuntu, run the following commands in a terminal to install the mssql-server package.


If you have previously installed a CTP or RC release of SQL Server 2017, you must first remove the old repository before registering one of the GA repositories. For more information, see Change repositories from the preview repository to the GA repository.

  1. Import the public repository GPG keys:
    wget -qO- https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -
  2. Register the Microsoft SQL Server Ubuntu repository:
    sudo add-apt-repository "$(wget -qO- https://packages.microsoft.com/config/ubuntu/16.04/mssql-server-2017.list)"


    This is the Cumulative Update (CU) repository. For more information about your repository options and their differences, see Configure repositories for SQL Server on Linux.

  3. Run the following commands to install SQL Server:
    sudo apt-get update
    sudo apt-get install -y mssql-server
  4. After the package installation finishes, run mssql-conf setup and follow the prompts to set the SA password and choose your edition.
    sudo /opt/mssql/bin/mssql-conf setup


    If you are trying SQL Server 2017 in this tutorial, the following editions are freely licensed: Evaluation, Developer, and Express.


    Make sure to specify a strong password for the SA account (Minimum length 8 characters, including uppercase and lowercase letters, base 10 digits and/or non-alphanumeric symbols).

  5. Once the configuration is done, verify that the service is running:
    systemctl status mssql-server
  6. If you plan to connect remotely, you might also need to open the SQL Server TCP port (default 1433) on your firewall.

At this point, SQL Server is running on your Ubuntu machine and is ready to use!
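The SA password policy mentioned in step 4 can be sketched as a quick shell pre-check. This is a rough approximation of the documented rules, not the exact validation SQL Server performs, and the function name is made up for illustration:

```shell
# rough sketch of the documented SA password policy: at least 8 characters,
# upper- and lowercase letters, and a digit or non-alphanumeric symbol
check_sa_password() {
  p="$1"
  [ "${#p}" -ge 8 ] || return 1
  case "$p" in *[A-Z]*) ;; *) return 1 ;; esac
  case "$p" in *[a-z]*) ;; *) return 1 ;; esac
  case "$p" in *[0-9]*|*[!a-zA-Z0-9]*) ;; *) return 1 ;; esac
  return 0
}

check_sa_password 'S3cure!Pass' && echo "password looks ok"
```

Running the real setup with a password that fails the policy makes mssql-conf prompt again, so checking up front saves a round trip.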

Install the SQL Server command-line tools

To create a database, you need to connect with a tool that can run Transact-SQL statements on the SQL Server. The following steps install the SQL Server command-line tools: sqlcmd and bcp.

Use the following steps to install the mssql-tools on Ubuntu.

  1. Import the public repository GPG keys.
    curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -
  2. Register the Microsoft Ubuntu repository.
    curl https://packages.microsoft.com/config/ubuntu/16.04/prod.list | sudo tee /etc/apt/sources.list.d/msprod.list
  3. Update the sources list and run the installation command with the unixODBC developer package.
    sudo apt-get update 
    sudo apt-get install mssql-tools unixodbc-dev


    To update to the latest version of mssql-tools run the following commands:

    sudo apt-get update 
    sudo apt-get install mssql-tools 
  4. Optional: Add /opt/mssql-tools/bin/ to your PATH environment variable in a bash shell.

    To make sqlcmd/bcp accessible from the bash shell for login sessions, modify your PATH in the ~/.bash_profile file with the following command:

    echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bash_profile

    To make sqlcmd/bcp accessible from the bash shell for interactive/non-login sessions, modify the PATH in the ~/.bashrc file with the following command:

    echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bashrc
    source ~/.bashrc


Sqlcmd is just one tool for connecting to SQL Server to run queries and perform management and development tasks. Other tools include SQL Server Management Studio (SSMS) on Windows and the mssql extension for Visual Studio Code.

Connect locally

The following steps use sqlcmd to locally connect to your new SQL Server instance.

  1. Run sqlcmd with parameters for your SQL Server name (-S), the user name (-U), and the password (-P). In this tutorial, you are connecting locally, so the server name is localhost. The user name is SA and the password is the one you provided for the SA account during setup.
    sqlcmd -S localhost -U SA -P '<YourPassword>'


    You can omit the password on the command line to be prompted to enter it.


    If you later decide to connect remotely, specify the machine name or IP address for the -S parameter, and make sure port 1433 is open on your firewall.

  2. If successful, you should get to a sqlcmd command prompt: 1>.
  3. If you get a connection failure, first attempt to diagnose the problem from the error message. Then review the connection troubleshooting recommendations.

Create and query data

The following sections walk you through using sqlcmd to create a new database, add data, and run a simple query.

Create a new database

The following steps create a new database named TestDB.

  1. From the sqlcmd command prompt, enter the following Transact-SQL command to create a test database:
    CREATE DATABASE TestDB
  2. On the next line, write a query to return the names of all of the databases on your server:
    SELECT Name from sys.Databases
  3. The previous two commands were not executed immediately. You must type GO on a new line to execute the previous commands:
    GO

Insert data

Next create a new table, Inventory, and insert two new rows.

  1. From the sqlcmd command prompt, switch context to the new TestDB database:
    USE TestDB
  2. Create new table named Inventory:
    CREATE TABLE Inventory (id INT, name NVARCHAR(50), quantity INT)
  3. Insert data into the new table:
    INSERT INTO Inventory VALUES (1, 'banana', 150); INSERT INTO Inventory VALUES (2, 'orange', 154);
  4. Type GO to execute the previous commands:
    GO

Select data

Now, run a query to return data from the Inventory table.

  1. From the sqlcmd command prompt, enter a query that returns rows from the Inventory table where the quantity is greater than 152:
    SELECT * FROM Inventory WHERE quantity > 152;
  2. Execute the command:
    GO
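The interactive steps above can also be scripted: put the batches in a file, each terminated by GO, and feed the file to sqlcmd with -i. A sketch, with the password variable as a placeholder:

```shell
# write the T-SQL batches to a file; GO separates batches exactly as it
# does at the interactive sqlcmd prompt
cat > /tmp/create_testdb.sql <<'EOF'
CREATE DATABASE TestDB;
GO
USE TestDB;
CREATE TABLE Inventory (id INT, name NVARCHAR(50), quantity INT);
INSERT INTO Inventory VALUES (1, 'banana', 150);
INSERT INTO Inventory VALUES (2, 'orange', 154);
GO
SELECT * FROM Inventory WHERE quantity > 152;
GO
EOF

# run it non-interactively (uses the connection details from above)
# sqlcmd -S localhost -U SA -P "$SA_PASSWORD" -i /tmp/create_testdb.sql
```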

Exit the sqlcmd command prompt

To end your sqlcmd session, type QUIT:

QUIT

Connect from Windows

SQL Server tools on Windows connect to SQL Server instances on Linux in the same way they would connect to any remote SQL Server instance.

If you have a Windows machine that can connect to your Linux machine, try the same steps in this topic from a Windows command-prompt running sqlcmd. Just verify that you use the target Linux machine name or IP address rather than localhost, and make sure that TCP port 1433 is open. If you have any problems connecting from Windows, see connection troubleshooting recommendations.

For other tools that run on Windows but connect to SQL Server on Linux, see the SQL Server cross-platform tools documentation.

MSSQL backup on Linux

/opt/mssql-tools/bin/sqlcmd -S localhost -U SA -Q "BACKUP DATABASE [MYDATABASE_DEV] TO DISK = N'/var/opt/mssql/data/MYDATABASE-20180615.bak' WITH NOFORMAT, NOINIT, NAME = 'MYDATABASE_STD50_DEV', SKIP, NOREWIND, NOUNLOAD, STATS = 10"
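The date-stamped file name above can be derived automatically, which makes the backup easy to run from cron. A sketch, with the database name and data directory taken from the command shown:

```shell
# derive a dated backup path like the one above
DB=MYDATABASE_DEV
BACKUP_FILE="/var/opt/mssql/data/${DB}-$(date +%Y%m%d).bak"
echo "$BACKUP_FILE"

# the actual backup (commented out: needs a running server and the SA password)
# /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -Q "BACKUP DATABASE [$DB] TO DISK = N'$BACKUP_FILE' WITH NOFORMAT, NOINIT, NAME = '${DB}_backup', SKIP, NOREWIND, NOUNLOAD, STATS = 10"
```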

MSSQL on Linux: Fix locale::facet::_S_create_c_locale name not valid

First, this report is not just about an issue with mssql-docker; I suspect it relates to mssql-tools for Linux in general, more specifically to sqlcmd. (Not sure where a better and more accessible place to report it would be.)

I run a docker container based on the ubuntu:latest image and I install the mssql-tools in order to be able to run sqlcmd (I'm connecting to SQL Server in a separate container).

Here is my Dockerfile

RUN apt-get -qy update && apt-get -qy install --no-upgrade --no-install-recommends \
        apt-transport-https \
        apt-utils \
        curl \
        software-properties-common

RUN curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add -
RUN add-apt-repository "$(curl -s https://packages.microsoft.com/config/ubuntu/16.04/prod.list)"

RUN ACCEPT_EULA=Y apt-get -qy install --no-upgrade --no-install-recommends \
        msodbcsql \
        mssql-tools

RUN echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bash_profile
RUN echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bashrc

Then I do docker exec -it <container> bash and BANG!

root@2531848bc8e4:/# /opt/mssql-tools/bin/sqlcmd
terminate called after throwing an instance of 'std::runtime_error'
  what():  locale::facet::_S_create_c_locale name not valid

The SQL Server installation docs do not mention anything about specific locale required.

The only source of any hints is Dockerfiles like the official mssql-docker one, which contain:

# install necessary locales
RUN apt-get install -y locales \
    && echo "en_US.UTF-8 UTF-8" > /etc/locale.gen \
    && locale-gen

If one digs deeper, one can find this comment (#8 (comment)) on the seemingly unrelated issue #8.

Clearly, this is a bug in the implementation of the mssql-tools, specifically in sqlcmd, which should detect that it is running in an environment with an incompatible locale and print an informative message instead of just terminating.
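Until that happens, a common workaround is to export a valid locale before invoking sqlcmd. A sketch (the server name and password variable are placeholders):

```shell
# C.UTF-8 ships with Debian/Ubuntu base images, so no locale-gen is needed;
# alternatively generate en_US.UTF-8 as in the Dockerfile snippet above
export LANG=C.UTF-8
export LC_ALL=C.UTF-8

# sqlcmd now starts instead of aborting with _S_create_c_locale
# /opt/mssql-tools/bin/sqlcmd -S mssql -U SA -P "$SA_PASSWORD"
```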

MSSQL Pssdiag/Sqldiag Manager

What is Pssdiag/Sqldiag Manager?

Pssdiag/Sqldiag Manager is a graphical interface that provides customization capabilities for collecting data from SQL Server using the sqldiag collector engine. The collected data can be used by the SQL Nexus tool, which helps you troubleshoot SQL Server performance problems. This is the same tool Microsoft SQL Server support engineers use for data collection when troubleshooting customers’ performance problems.

Contact/Organization not showing to Non-Admin User

CREATE TABLE IF NOT EXISTS `vtiger_cv2group` (
  `cvid` int(25) NOT NULL,
  `groupid` int(25) NOT NULL,
  KEY `vtiger_cv2group_ibfk_1` (`cvid`),
  KEY `vtiger_groups_ibfk_1` (`groupid`)
);

CREATE TABLE IF NOT EXISTS `vtiger_cv2role` (
  `cvid` int(25) NOT NULL,
  `roleid` varchar(50) NOT NULL
);

CREATE TABLE IF NOT EXISTS `vtiger_cv2rs` (
  `cvid` int(25) NOT NULL,
  `rsid` varchar(255) NOT NULL
);

CREATE TABLE IF NOT EXISTS `vtiger_cv2users` (
  `cvid` int(25) NOT NULL,
  `userid` int(25) NOT NULL,
  KEY `vtiger_cv2users_ibfk_1` (`cvid`),
  KEY `vtiger_users_ibfk_1` (`userid`)
);

How to upgrade Vtiger 6.5 to 7.1 – Tutorial

Two ways to upgrade Vtiger 6.5 to 7.1

There are two ways of updating vtiger to a new version.

Option #1 – Upgrading Vtiger Using Patch

The first way is to use the patch provided by Vtiger. This method is very similar to earlier versions. You will need to backup your files and database, download the patch, extract it, and run the VTiger 7 migration wizard.

If you are upgrading from 6.5 to 7.1, you will need to run this process twice: first upgrade vtiger 6.5 to 7.0, and then repeat the process to upgrade vtiger 7.0 to 7.1.

  1. Download the migration patch from the SourceForge site: Patch 6.5 to 7 or Patch 7.0 to 7.1
  2. Unzip it into the Vtiger CRM folder. A file and a migrate folder will be unpacked.
  3. In your browser, open the /migrate path: http://yourcrmurl.tld/vtigercrm/migrate

Follow the instructions provided on the wizard.

If everything works as expected, you will see the confirmation screen.

Upgrade vtiger to 7.1

Option #2 – Upgrading Vtiger 6.5 to 7.1 Directly with a clean installation

The second option is to start with a clean installation of vtiger 7.1, connect your database and run the migration scripts to make it up to date.

This option, even if more complex, lets you upgrade from an older version (as far back as 5.4) directly to vtiger 7.1. This method provides a significant increase in stability and a big decrease in headaches.

upgrade vtiger 6 to vtiger 7

  1. Download and install a fresh copy of Vtiger 7.1 – Follow this tutorial if you need help with it
  2. In your old Vtiger, disable all the custom modules and log out, but do NOT close the tab.
  3. Create a copy of the old database
  4. Edit the configuration file and replace the database name, user and password so that it connects to the database you created in step #3
  5. Edit vtigerversion.php and replace 7.1.0 with your current vtiger version
  6. Copy your custom module folders to the new installation: go to /modules/ and copy the custom module folders over
  7. Copy your /storage folder to the new installation
  8. Copy your /user_privileges folder to the new installation
  9. In your browser, open the /migrate path: http://yourcrmurl.tld/index.php?module=Migration&view=Index&mode=step1
  10. Edit vtigerversion.php to update the version back to 7.1.0
  11. Re-install the custom modules with the zip file provided by your vendor if you need to.
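The copy steps (7 and 8) can be scripted. A minimal sketch with hypothetical paths; demo directories are created here so the sketch runs anywhere, but in practice OLD and NEW would point at your real installations:

```shell
# OLD and NEW are placeholder paths -- adjust to your installations
OLD=/tmp/vtiger65
NEW=/tmp/vtiger71
mkdir -p "$OLD/storage" "$OLD/user_privileges" "$NEW"   # demo layout only

# copy the storage and user_privileges folders into the new installation
for d in storage user_privileges; do
  cp -a "$OLD/$d" "$NEW/"
done
ls "$NEW"
```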

vtiger upgrade version 7

If everything works fine, you should have a clean install of Vtiger 7.1, with all your data and custom modules on it.

If you are a ‘do it yourself’ type of person and have enough knowledge about VTiger and its upgrade process, you should be able to do it yourself.

On the other hand, if you consider your data too valuable to take the risk, we at VGS Global can do it for you.


Nmap Cheat Sheet

Nmap has a multitude of options and when you first start playing with this excellent tool it can be a bit daunting. In this cheat sheet you will find a series of practical example commands for running Nmap and getting the most of this powerful tool.

Keep in mind that this cheat sheet merely touches the surface of the available options. The Nmap Documentation portal is your reference for digging deeper into the options available.

Nmap Target Selection

Scan a single IP nmap 192.168.1.1
Scan a host nmap www.testhostname.com
Scan a range of IPs nmap 192.168.1.1-20
Scan a subnet nmap 192.168.1.0/24
Scan targets from a text file nmap -iL list-of-ips.txt

These are all default scans, which will scan 1000 TCP ports. Host discovery will take place.
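The -iL option reads targets from a plain text file, one target per line. A sketch of building such a file (the addresses are examples from the RFC 5737 documentation ranges):

```shell
# one target per line; hostnames and CIDR ranges are also accepted
cat > list-of-ips.txt <<'EOF'
192.0.2.1
192.0.2.10
198.51.100.0/24
EOF

# then point nmap at the file
# nmap -iL list-of-ips.txt
```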

Nmap Port Selection

Scan a single Port nmap -p 22 192.168.1.1
Scan a range of ports nmap -p 1-100 192.168.1.1
Scan 100 most common ports (Fast) nmap -F 192.168.1.1
Scan all 65535 ports nmap -p- 192.168.1.1

Nmap Port Scan types

Scan using TCP connect nmap -sT 192.168.1.1
Scan using TCP SYN scan (default) nmap -sS 192.168.1.1
Scan UDP ports nmap -sU -p 123,161,162 192.168.1.1
Scan selected ports – ignore discovery nmap -Pn -F 192.168.1.1

Privileged access is required to perform the default SYN scans. If privileges are insufficient, a TCP connect scan will be used. A TCP connect scan requires a full TCP connection to be established and is therefore slower. Ignoring discovery is often required as many firewalls or hosts will not respond to PING, so they could be missed unless you select the -Pn parameter. Of course this can make scan times much longer as you could end up sending scan probes to hosts that are not there.

Take a look at the Nmap Tutorial for a detailed look at the scan process.

Service and OS Detection

Detect OS and Services nmap -A 192.168.1.1
Standard service detection nmap -sV 192.168.1.1
More aggressive Service Detection nmap -sV --version-intensity 5 192.168.1.1
Lighter banner grabbing detection nmap -sV --version-intensity 0 192.168.1.1

Service and OS detection rely on different methods to determine the operating system or service running on a particular port. The more aggressive service detection is often helpful if there are services running on unusual ports. On the other hand, the lighter version of service detection will be much faster, as it does not really attempt to detect the service but simply grabs the banner of the open service.

Nmap Output Formats

Save default output to file nmap -oN outputfile.txt 192.168.1.1
Save results as XML nmap -oX outputfile.xml 192.168.1.1
Save results in a format for grep nmap -oG outputfile.txt 192.168.1.1
Save in all formats nmap -oA outputfile 192.168.1.1

The default format could also be saved to a file using a simple file redirect command > file. Using the -oN option allows the results to be saved, but they can also be monitored in the terminal as the scan is under way.
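One reason to keep the grepable (-oG) format around is that the results are easy to post-process. A sketch, run against a hand-made sample line shaped like real -oG output (trimmed for brevity):

```shell
# sample line shaped like nmap -oG output
cat > /tmp/scan.gnmap <<'EOF'
Host: 10.0.0.5 ()  Ports: 22/open/tcp//ssh///, 80/open/tcp//http///, 443/closed/tcp//https///
EOF

# print "host port" for every open port found
awk '/Ports:/ {
  host = $2
  for (i = 1; i <= NF; i++)
    if (split($i, f, "/") >= 2 && f[2] == "open")
      print host, f[1] + 0
}' /tmp/scan.gnmap > /tmp/open_ports.txt
cat /tmp/open_ports.txt
```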

Digging deeper with NSE Scripts

Scan using default safe scripts nmap -sV -sC 192.168.1.1
Get help for a script nmap --script-help=ssl-heartbleed
Scan using a specific NSE script nmap -sV -p 443 --script=ssl-heartbleed.nse 192.168.1.1
Scan with a set of scripts nmap -sV --script=smb* 192.168.1.1

According to my Nmap install there are currently 471 NSE scripts. The scripts are able to perform a wide range of security related testing and discovery functions. If you are serious about your network scanning you really should take the time to get familiar with some of them.

The option --script-help=$scriptname will display help for the individual scripts. To get an easy list of the installed scripts try locate nse | grep script.

You will notice I have used the -sV service detection parameter. Generally most NSE scripts will be more effective and you will get better coverage by including service detection.

A scan to search for DDOS reflection UDP services

Scan for UDP DDOS reflectors nmap -sU -A -PN -n -pU:19,53,123,161 --script=ntp-monlist,dns-recursion,snmp-sysdescr 192.168.1.0/24

UDP based DDOS reflection attacks are a common problem that network defenders come up against. This is a handy Nmap command that will scan a target list for systems with open UDP services that allow these attacks to take place. Full details of the command and the background can be found on the Sans Institute Blog where it was first posted.

HTTP Service Information

Gather page titles from HTTP services nmap --script=http-title 192.168.1.0/24
Get HTTP headers of web services nmap --script=http-headers 192.168.1.0/24
Find web apps from known paths nmap --script=http-enum 192.168.1.0/24

There are many HTTP information gathering scripts; here are a few that are simple but helpful when examining larger networks. They help in quickly identifying what HTTP service is running on the open port. Note that the http-enum script is particularly noisy. It is similar to Nikto in that it will attempt to enumerate known paths of web applications and scripts. This will inevitably generate hundreds of 404 HTTP responses in the web server error and access logs.

Detect Heartbleed SSL Vulnerability

Heartbleed Testing nmap -sV -p 443 --script=ssl-heartbleed 192.168.1.0/24

Heartbleed detection is one of the available SSL scripts. It will detect the presence of the well known Heartbleed vulnerability in SSL services. Specify alternative ports to test SSL on mail and other protocols (Requires Nmap 6.46).

IP Address information

Find Information about IP address nmap --script=asn-query,whois,ip-geolocation-maxmind 192.168.1.0/24

Gather information related to the IP address and netblock owner of the IP address. Uses ASN, whois and geoip location lookups. See the IP Tools for more information and similar IP address and DNS lookups.

Remote Scanning

Testing your network perimeter from an external perspective is key when you wish to get the most accurate results. By assessing your exposure from the attacker's perspective you can validate firewall rule audits and understand exactly what is allowed into your network. This is why we offer a hosted or online version of the Nmap port scanner: to enable remote scanning easily and effectively, because anyone who has tested perimeter networks knows very well how badly people tend to secure them.

Additional Resources

The above commands are just a taste of the power of Nmap. Check out the full set of features by running Nmap with no options. The creator of Nmap, Fyodor, has a book available that covers the tool in depth. You could also check out our Nmap Tutorial that has more information and tips.

Install Kopano on Debian 8

What is Kopano?

Kopano is a fork (a spin-off) of Zarafa. Some components have been adopted, others written from scratch. Kopano is open source, offers a wide range of functions and has a modular structure:

  • core (base for all other components)
  • webapp (fully equipped web GUI a la Outlook Web Access)
  • files (integration of various cloud services, for example: Owncloud / Nextcloud)
  • Web meetings
  • deskapp (Full desktop email client based on a modified Chromium browser! -> Send to … works! -> no Outlook!)
  • mdm (mobile device management)

Operating system platform

As the operating system I have (as so often) Debian GNU/Linux “Jessie” in use. The base of the installation is basically a LAMP stack. PHP should run as mod_php. Apart from that, there are of course some dependencies that need to be met when installing Kopano.

Sounds complicated, but it is not!

Of course, some other distributions are also supported, such as OpenSuse, Ubuntu, Fedora, RHEL, SLE, …

If you like working “from scratch”, you can also compile everything yourself. However, that is not my thing ;-).

Where do you get the packages from Kopano?

The community edition packages can be downloaded from the Kopano website. Each directory corresponds to a module (core, deskapp, files, …).

Community Edition vs. subscription

If the Kopano Community Edition is used, the packages come as a bleeding-edge variant: they are nightly builds.

The packages can / must be downloaded via wget. The installation is done with dpkg.

If a subscription is available, things are more convenient, because then the Kopano repository can be integrated, which means the entire package management can be done via apt-get.

Mobile phone synchronization Z-Push

Smartphone users are not left out either. Via z-push, which is available for free, ActiveSync is implemented.

Installation LAMP

I’m not going to talk about installing Debian GNU/Linux “from scratch” here. I assume that we start from a basic installation.

Since it is also necessary to provide a database and to edit various settings, I also install the phpmyadmin package.

apt-get install mysql-server apache2 phpmyadmin

MySQL facility

There is not much to say about that. apt-get asks during the setup for a password for the MySQL superadmin “root”. You should definitely set a password here.

As a web server Apache2 is used. This should also be selected in the setup dialog.

Finally, the command mysql_secure_installation should be run (remove anonymous users: yes; disallow root login remotely: yes; remove test database: yes; reload privilege tables now: yes).


For easier management of databases, phpmyadmin will be installed. Again, the setup is completed quickly: answer the question “Configure the database for phpmyadmin with dbconfig-common?” with YES, enter the MySQL root password, and confirm the question for a password for the phpmyadmin user with Return; then the installation is complete.

Phpmyadmin should now be accessible via the Internet browser.

Thus, we have created a LAMP basis and can now dedicate ourselves to the Kopano installation.

Install Kopano

Download packages

Kopano’s packages can be downloaded via wget if you do not have a subscription. (Note: subscribers can access the packages directly via apt-get.)

When working with wget, the download page mentioned above is the address of choice.

On the Linux server itself, you can create a directory to download the packages into, or you can use the /tmp directory.

cd /tmp

wget .0.20_16-Debian_8.0-all.tar.gz

Note: The file names change daily (the version number changes, since these are nightly builds).

After downloading, the *.tar.gz files are unpacked. For each unpacked file, a corresponding subdirectory is created automatically.

Unzip and install the Kopano-core

tar xfvz core-8.4.0~35_19.1-Debian_8.0-amd64.tar.gz

cd core-8.4.0~35_19.1-Debian_8.0-amd64

dpkg -i *.deb

The installation attempt fails because there are unresolved dependencies. In fact, you get the message: “Errors were encountered while processing:”

Clean up unresolved dependencies

Apt would not be apt if there were no easy solution to this circumstance.

apt-get install -f

This command will pull in all missing packages.

Kopano-core – second attempt

On the second installation attempt of Kopano-core, no unresolved dependencies should be displayed anymore.
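The unpack / install / fix-dependencies sequence above can be scripted, because each archive unpacks into a directory matching its own name without the .tar.gz suffix. A sketch using the core archive name from above (the dpkg and apt-get lines are commented out since they need root and the real packages):

```shell
# nightly file names vary, so derive the unpack directory from the archive name
for archive in core-8.4.0~35_19.1-Debian_8.0-amd64.tar.gz; do
  dir="${archive%.tar.gz}"
  echo "would unpack $archive into $dir"
  # tar xfvz "$archive" && (cd "$dir" && dpkg -i *.deb)
done
# apt-get install -f   # afterwards, pull in any missing dependencies
```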

Kopano Webapp

When installing the webapp, the procedure is the same as for the core: unpack the tar.gz file, change into the directory with the unpacked .deb files, and install with dpkg.

tar xvfz webapp-<version>.tar.gz
cd webapp-<version>
dpkg -i *.deb

Again, missing dependencies will lead to problems during installation. Again, apt-get solves the problem.

apt-get -f install

Finally, the installation of the webapp is no longer a problem (run again in the directory where the unpacked files of the webapp are):

dpkg -i *.deb

In order to activate the configuration of the webapp, the Apache web server must be restarted or the configuration must be reloaded.

service apache2 reload

or

service apache2 restart

Is the webapp available?

This can be tested very easily using a browser.

http://<FQDN or IP>/webapp

The basic installation of Kopano-core and the webapp is now complete. But so far this is purely the frontend! The database connection is not yet available.

Set up MySQL database

With phpmyadmin this is not too big a problem.

First, create a new database, then a user.

The conclusion is the assignment of the user authorizations (database kopano / user kopano).
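The same database, user and grants can also be created from the MySQL console instead of phpmyadmin. A sketch; 'secret' is a placeholder password:

```shell
# write the statements to a file, then feed them to mysql as root
cat > /tmp/kopano_db.sql <<'EOF'
CREATE DATABASE kopano;
CREATE USER 'kopano'@'localhost' IDENTIFIED BY 'secret';
GRANT ALL PRIVILEGES ON kopano.* TO 'kopano'@'localhost';
FLUSH PRIVILEGES;
EOF

# run it (commented out: needs a running MySQL server and the root password)
# mysql -u root -p < /tmp/kopano_db.sql
```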

Customize the Kopano-Core configuration file

For Kopano to get something from the database, the configuration file has to be adjusted.

vim /etc/kopano/server.cfg

In the MYSQL SETTINGS section (for database_engine = mysql), the host, database, database user and database password must be entered.

Start Kopano Server

The start of the server should now cause the previously created database to be filled with the necessary tables / fields.

First we stop a running Kopano server.

service kopano-server stop

Then the start:

service kopano-server start

A subsequent status check should show that the server is running.

Kopano user

In this configuration the Kopano users come from the MySQL database and must be created via the Linux console.

Subsequently, a user “gestl” including mail address is created. The user is an administrator and therefore also receives the corresponding parameter (-a1).

kopano-admin -c gestl -p 12init34 -e -f "Daniel Gestl" -a1

Login to the webapp

In order to check whether the configuration work was successful, a login attempt completes the setup.

http://<FQDN or IP>/webapp

As you can see, the installation is no magic. However, the integration of Postfix is still missing, so that emails can be sent and received (… and a little fine-tuning).

I will deal with that in a second article on “Kopano”.

PS: Kopano can do much more!