# Vagrant

For development purposes, we have Vagrant build files. The Vagrantfile was updated for IXP Manager v7.

The entire system is built from a fresh Ubuntu 24.04 installation via the `tools/vagrant/bootstrap.sh` script. This also installs a systemd service that runs `tools/vagrant/startup.sh` on a reboot to restart the various services.
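The service simply re-runs the startup script at boot. As a rough sketch of the mechanism (the unit name and directives below are assumptions for illustration, not taken from the repo; `bootstrap.sh` installs the real unit):

```ini
# Illustrative sketch only - bootstrap.sh installs the actual unit.
[Unit]
Description=IXP Manager Vagrant startup tasks
After=network-online.target mysql.service

[Service]
Type=oneshot
ExecStart=/vagrant/tools/vagrant/startup.sh

[Install]
WantedBy=multi-user.target
```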
## Quick Vagrant with VirtualBox
Note that the developers use Parallels (see below) and have not tested on VirtualBox for some time.
If you want to get IXP Manager with Vagrant and VirtualBox up and running quickly, follow these steps:
1. Install Vagrant (see: https://developer.hashicorp.com/vagrant/install)
2. Install VirtualBox (see: https://www.virtualbox.org/)
3. Clone IXP Manager to a directory:

    ```sh
    git clone https://github.com/inex/IXP-Manager.git ixpmanager
    cd ixpmanager
    ```

4. Edit the `Vagrantfile` in the root of IXP Manager: delete the `config.vm.provider "parallels" do |prl|` block and uncomment the `config.vm.provider "virtualbox" do |vb|` block (see the illustrative excerpt after this list).
5. Spin up a Vagrant virtual machine:

    ```sh
    vagrant up
    ```
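For orientation, the two provider blocks in the `Vagrantfile` have roughly the following shape (illustrative only; values such as memory are invented here, so check the actual file):

```ruby
# Illustrative sketch - refer to the real Vagrantfile in the repo root.
# Delete / comment out the Parallels block:
#
# config.vm.provider "parallels" do |prl|
#   prl.memory = 4096
# end

# ...and uncomment the VirtualBox block:
config.vm.provider "virtualbox" do |vb|
  vb.memory = 4096
end
```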
## Quick Vagrant with Parallels
1. Install Vagrant (see: https://developer.hashicorp.com/vagrant/install). On MacOS:

    ```sh
    brew tap hashicorp/tap
    brew install hashicorp/tap/hashicorp-vagrant
    ```

2. Install Parallels (see: https://www.parallels.com/)
3. Install the Parallels provider. E.g., on MacOS when Vagrant is installed via Homebrew:

    ```sh
    vagrant plugin install vagrant-parallels
    ```

4. Clone IXP Manager to a directory:

    ```sh
    git clone https://github.com/inex/IXP-Manager.git ixpmanager
    cd ixpmanager
    ```

5. Spin up a Vagrant virtual machine:

    ```sh
    vagrant up
    ```
## Next Steps - Access IXP Manager
- Access IXP Manager on: http://localhost:8088/
- Log in with one of the following usernames / passwords:
    - Admin user: `vagrant / Vagrant1` (API key: `r8sFfkGamCjrbbLC12yIoCJooIRXzY9CYPaLVz92GFQyGqLq`)
    - Customer Admin: `as112 / AS112as112`
    - Customer User: `as112user / AS112as112`
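The admin API key above can also be used against the IXP Manager API. As a quick smoke test, the IX-F member export endpoint should return JSON (passing the key via the `X-IXP-Manager-API-Key` header; available endpoints may vary with version and configuration):

```sh
curl -s -H "X-IXP-Manager-API-Key: r8sFfkGamCjrbbLC12yIoCJooIRXzY9CYPaLVz92GFQyGqLq" \
    http://localhost:8088/api/v4/member-export/ixf/1.0
```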
## Vagrant Notes
Please see Vagrant's own documentation for a full description of how to use it.
- To access the virtual machine that the above has spun up, just run the following from the `ixpmanager` directory: `vagrant ssh`
- Once logged into the Linux machine, you'll find the `ixpmanager` directory mounted under `/vagrant`.
- You can `sudo su -`.
- You can access MySQL using `root / password` via:
    - Locally: `mysql -u root -ppassword ixp`
    - From the machine running Vagrant: `mysql -u root -ppassword -h 127.0.0.1 -P 33061`
    - Via phpMyAdmin on http://127.0.0.1:8088/phpmyadmin
- As mentioned above, the IXP Manager application is mounted under `/vagrant` in the Vagrant virtual machine. This is mounted as the `vagrant` user. Any changes made on your own machine are immediately reflected on the virtual machine and vice versa.
- Apache runs as `vagrant` to avoid all file system permission issues.
## Database Details
Spinning up Vagrant in the above manner loads a sample database from `tools/vagrant/vagrant-base.sql`. If you have a preferred development database, place a bzip'd copy of it in the `ixpmanager` directory called `ixpmanager-preferred.sql.bz2` before step 5 above.
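For example, a suitable file can be produced from an existing development database along these lines (the database name and credentials here are assumptions for illustration):

```sh
mysqldump -u root -p ixp | bzip2 > ixpmanager-preferred.sql.bz2
```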
## SNMP Simulator and MRTG
The Vagrant bootstrapping includes installing snmpsim, making three "switches" matching those in the supplied database available for polling. The source snmpwalks for these are copied from `tools/vagrant/snmpwalks` to `/srv/snmpclients` and the values can be freely edited there.
Example of polling when ssh'd into Vagrant:

```sh
snmpwalk -c swi1-fac1-1 -v 2c swi1-fac1-1
snmpwalk -c swi1-fac2-1 -v 2c swi1-fac1-1
snmpwalk -c swi2-fac1-1 -v 2c swi2-fac1-1
```
As you can see, the community selects the source file - i.e., `-c swi1-fac1-1` for `/srv/snmpclients/swi1-fac1-1.snmprec`. The Vagrant bootstrap file also adds these switch names to `/etc/hosts`.
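The `.snmprec` files use snmpsim's plain `OID|type-tag|value` line format, which makes hand-editing straightforward. For instance, a sysName record might look like this (value invented for illustration; tag `4` denotes an OCTET STRING):

```
1.3.6.1.2.1.1.5.0|4|swi1-fac1-1
```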
The bootstrapping also configures MRTG to run, including it in the crontab rather than using dummy graphs. The SNMP simulator has some randomised elements for some of the interface counters.
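For reference, an MRTG crontab entry typically looks something like the following (the schedule and paths are illustrative; `bootstrap.sh` installs the real entry):

```
*/5 * * * * env LANG=C mrtg /etc/mrtg/mrtg.cfg
```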
## Route Server / Collector / AS112 Testbed and Looking Glass
When running `vagrant up` for the first time, it will create a full route server / collector / AS112 testbed complete with clients:
- Route servers, collectors and AS112 Bird daemons are started from hardcoded handles based on the Vagrant test database.
- Client router Bird instances (dual-stack v4/v6) are generated and started based on their VLAN interfaces as they exist at the time the scripts are run.
All Bird instance sockets are located in `/var/run/bird/`, allowing you to connect to them using `birdc -s /var/run/bird/xxx.ctl`.
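For example, to see which instances are running and query one of them (socket names depend on the router handles in the test database):

```sh
ls /var/run/bird/                                        # list the available control sockets
sudo birdc -s /var/run/bird/<handle>.ctl show protocols  # substitute a real handle
```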
In addition to this, a second Apache virtual host is set up listening on port 81 locally, providing access to Birdseye installed in `/srv/birdseye`. The bundled Vagrant database is already configured for this and should work out of the box. All of Birdseye's env files are generated via:

```sh
php /vagrant/artisan vagrant:generate-birdseye-configurations
```
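Once generated, a quick check from inside the virtual machine confirms the Birdseye vhost is answering (this just fetches the HTTP status line from the vhost root):

```sh
curl -sI http://localhost:81/ | head -1
```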
Various additional scripts support all of this:

- The `tools/vagrant/bootstrap.sh` file, which sets everything up.
- `tools/vagrant/scripts/refresh-router-testbed.sh` will reconfigure all routers.
- `tools/vagrant/scripts/as112-reconfigure-bird2.sh` will (re)configure and start, if necessary, the AS112 Bird instances.
- `tools/vagrant/scripts/rs-api-reconfigure-all.sh` will (re)configure and start, if necessary, the route server Bird instances.
- `tools/vagrant/scripts/rc-reconfigure.sh` will (re)configure and start, if necessary, the route collector Bird instances.
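These can be re-run at any time from inside the virtual machine, e.g. (the need for root here is an assumption; adjust to taste):

```sh
vagrant ssh
sudo /vagrant/tools/vagrant/scripts/refresh-router-testbed.sh
```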
For the clients, we run the following:

```sh
mkdir -p /srv/clients
chown -R vagrant: /srv/clients
php /vagrant/artisan vagrant:generate-client-router-configurations
chmod a+x /srv/clients/start-reload-clients.sh
/srv/clients/start-reload-clients.sh
```
All router IPs are added to the loopback interface as part of `tools/vagrant/bootstrap.sh` (or the `startup.sh` script on a reboot). There are also the necessary entries in `/etc/hosts` to map router handles to IP addresses. There are two critical Bird BGP configuration options that allow multiple instances to run on the same server and speak with each other:
```
strict bind yes;
multihop;
```
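In context, these options sit inside each BGP protocol definition. A minimal illustrative Bird 2 stanza (the protocol name, ASNs and 192.0.2.0/24 documentation addresses are invented for the example):

```
protocol bgp client_example {
    local 192.0.2.8 as 65001;
    neighbor 192.0.2.1 as 65000;
    strict bind yes;
    multihop;
    ipv4 { import all; export all; };
}
```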