
BLOG:

Our laboratory Blog

Positions:

Postdoctoral Opening

Scholarships - Grants

Downloads:

** Download Sensory Information Science Course
** Download Micro-Phenomena course
** Download Intro to Quantum Mechanics course

Thesis

** Download students' Degree, Master's, and Doctoral theses

Members


Group photos


R. Micheletto

Publications

All Publications


Perception Experiments site

Perception experiments are located here!

2024

Execute SIESTA distributed on multiple machines

If you have several Linux machines that can be connected with LAN cables, you can distribute SIESTA calculations over multiple CPUs using MPI multi-processing. Here is how to do that.
  • Install the same version of Linux on all the machines (we suggest using the older Ubuntu 16.04).
  • Create the exact same folder structure and install SIESTA on all of them (we suggest this version).
  • One machine will be the "master": go there and create a folder, for example call it "qTest".
  • In that folder copy all the .psf and .fdf files necessary to run your SIESTA simulation (for example, these files are a quick SIESTA simulation test). Also copy your "siesta" executable; do not use symbolic links.
  • Test whether your SIESTA installation works correctly on the "master" machine. You should be in the qTest folder and use a command similar to this:
    mpirun -np 4 siesta < name.fdf > out.dat ("4" is the number of threads of the master PC, "name.fdf" is your input file)
  • Repeat the identical test on all your machines and verify that all of them complete the test run correctly.
  • Take note of the IP address of each machine; use the command
    ip addr on each of them.
  • You must set up SSH access to allow communication between all the machines. Check that all of them have openssh-server installed. If not, use: sudo apt-get install openssh-server
  • On the master machine generate a passwordless SSH key with:
    ssh-keygen -t rsa
  • Copy the key to all the other machines with:
    ssh-copy-id user@192.168.1.2
    where "user" is the username (use the same username for all machines) and the numbers are the IP address of the machines.
  • On each client, again create the passwordless SSH keys as above (ssh-keygen -t rsa), and copy those to all the machines as above (ssh-copy-id user@192.168.1.1)
  • Verify that the master can communicate with each machine and that each machine can communicate with the master and all the others. Use this code:
    ssh user@192.168.1.1
    you should be able to connect directly, without password.
  • Now you need to create an NFS shared folder visible from all the PCs. First install nfs-kernel-server on all the machines. Use this command: sudo apt install nfs-kernel-server.
  • Go to the master machine and setup the sharable folder:
    sudo mkdir -p /mnt/nfs_share
    sudo chown nobody:nogroup /mnt/nfs_share
    sudo chmod 777 /mnt/nfs_share
    sudo nano /etc/exports
    The last command opens the folder export settings for editing: add the line "/mnt/nfs_share 192.168.1.0/24(rw,sync,no_subtree_check)" to the exports file. The address "192.168.1.0/24" is the subnet you are using; it depends on how you set up your network, so please verify it. Then restart the NFS server with
    sudo exportfs -a
    sudo systemctl restart nfs-kernel-server
    These settings are permanent; they will be kept even after a reboot.
  • On the client machines you have to create the exact same folder structure and mount the remote NFS folder:
    sudo mkdir -p /mnt/nfs_share
    sudo mount 192.168.1.1:/mnt/nfs_share /mnt/nfs_share
    To make the mount permanent, add this line to your /etc/fstab file:
    192.168.1.1:/mnt/nfs_share /mnt/nfs_share nfs defaults 0 0
    The IP address should be the address of your master machine.
  • Now go to your master machine, take the folder where you tested SIESTA (qTest) and copy it inside your NFS share. You will then have a folder named /mnt/nfs_share/qTest. Remove all the SIESTA output of the previous test. Create a file named hosts.txt that contains the IP address of every machine (including the master) and the number of threads available per machine. Use this format:
    192.168.1.1 slots=4
    192.168.1.2 slots=4
    192.168.1.3 slots=6
    In this example the numbers are the IP addresses of the machines; the first two machines have "i3" CPUs, the last has an "i5". Put the master machine at the top of the list (the list is order sensitive).
    Optional things you can do:
  • Verify that all the machines have the same date/time settings, and that there are no conflicts with the BIOS hardware clock settings. You can use these commands:
    sudo hwclock ; date # (to see the settings)
    sudo date --set="2024-06-26 10:05:59.990" # (to set the system date and time)
    sudo hwclock --systohc # (to set the hardware clock from the system clock)

    If you need to correct the timezone use: sudo dpkg-reconfigure tzdata
  • On the master machine, change directory to /mnt/nfs_share/qTest, where you have your siesta executable and the .psf and .fdf files.
  • Launch the simulation with a command similar to this:
    mpirun --hostfile hosts.txt -np 14 siesta < name.fdf > out.dat &
    The switch "-np 14" tells MPI to run 14 parallel SIESTA processes, the sum of all the slots available on the three machines of this example. (A quick way to check that MPI itself reaches all the machines is sketched below.)
    It should run correctly. If it runs slowly, the cause could be slow Ethernet; verify that your hub is fast enough. A good monitoring tool is btop; install it with sudo apt install btop.
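
    Before a long run, you may want to confirm that MPI can really reach every machine. Here is a minimal sketch, assuming Python 3 and the mpi4py package are installed on all the machines (for example with pip3 install mpi4py; neither is required by SIESTA itself). Save it as hello_mpi.py in the shared folder:

    # hello_mpi.py - print one line per MPI process, with the host it runs on
    from mpi4py import MPI
    import socket

    comm = MPI.COMM_WORLD
    print("rank %d of %d running on %s"
          % (comm.Get_rank(), comm.Get_size(), socket.gethostname()))

    Run it with mpirun --hostfile hosts.txt -np 14 python3 hello_mpi.py: you should see one line per slot, and the host names of all the machines in hosts.txt should appear.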

    We hope this will help you to set up your mini-cluster!

    2022

    Temporary JavaScript test

    Mandelbrot test =>click<=

    2021

    How to install SIESTA on CRAY XC50




    Here is how we installed SIESTA on our CRAY XC50:

  • In a browser, go to the SIESTA home page (siesta). From there you are directed to a Git repository (here). Click "Clone" and then "Clone with HTTPS". Now you have the address of the SIESTA repository in your clipboard.

  • Enter your CRAY area, create a folder where you want to work and change directory into it.

  • enter the command "git clone" and paste the address obtained from the git repository. Like the example below (the https address can be different):
    git clone https://gitlab.com/siesta-project/siesta.git

  • Now you have cloned the SIESTA repository into your working directory. Create a subdirectory called, for example, "obj1" and go into it.
    mkdir obj1
    cd obj1


  • Execute the initialization script with the command:
    sh ../Src/obj_setup.sh

  • Now you have populated the "obj1" folder with the files necessary to compile SIESTA.

  • You need to create an "arch.make" file. You can modify for your needs the provided DOCUMENTED-TEMPLATE.make file (it is located in the ../Obj folder). Please note that you should not use MPI symbols in your arch.make file, CRAY will use its own MPI's symbols automatically. If you have a CRAY XC50, this is what worked for us: arch.make.

  • Now you need to compile. First, set the CRAY environment variables to GNU mode:
    module switch PrgEnv-cray PrgEnv-gnu

  • Compile with the command
    make

    ( If your compilation stops with an error saying "This version of SIESTA cannot be identified", go back to the main folder and execute git describe:
    cd ..
    git describe

    You will see a string beginning with "v" followed by numbers representing the version. Create two files: SIESTA.release, containing the string as it is, and SIESTA.version, containing the string without the initial "v". Try the "make" command again; this time it should compile to the end. A convenience sketch for this step is given just below.)
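
    If you prefer to script this step, here is a small sketch of our own (not part of SIESTA): it writes the two files from the output of git describe. Run it with Python from the top folder of the repository, where git describe works:

    # write_version_files.py - create SIESTA.release and SIESTA.version
    import subprocess

    # read the version tag from git, e.g. "v4.1-..."
    tag = subprocess.check_output(["git", "describe"]).decode().strip()
    with open("SIESTA.release", "w") as f:
        f.write(tag + "\n")              # the string as it is
    with open("SIESTA.version", "w") as f:
        f.write(tag.lstrip("v") + "\n")  # the same string without the initial "v"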

  • After a while you will have the "siesta" executable ready in the obj1 folder. Go back to the main directory, create a new subfolder for your ab-initio calculation files, and change into it:
    cd ..
    mkdir test1
    cd test1


  • Here create a symbolic link to the siesta executable you just compiled:
    ln -s ../obj1/siesta. Also copy here all the files necessary for your calculations (we use "scp" to do that).

  • To run your simulation on the CRAY XC50, you need to create a launcher script. Please read the "aprun" section of your CRAY manual. For example, this is one of our scripts: launch.sh. Of course, you have to modify it to fit your CRAY account name, folder setup, number of nodes and threads you want to use, and so on.

    Good luck and we hope this helps!

    (If something goes wrong, read the SIESTA manual and your CRAY manual. A manual for the XC50 is, for example, here.)

    How to install the NEST simulator in Google Colaboratory


    Yokohama City University, Micheletto Laboratory
    吉田 瞬良

    This post explains how to use version 2.x of the NEST simulator on Google Colaboratory. (Note that, as of 2021/06/18, the latest version is 3.0.)

    Because we use Google Colaboratory, there is no need to set up a local environment. Therefore, even on Windows, you can use the NEST simulator easily without building a VM or similar environment.

  • First, open Google Colaboratory. (If you do not know how to open Google Colaboratory, please look it up.)

  • In Google Colaboratory you can execute Linux commands by prefixing them with "!". Therefore, enter the following commands in a code cell. These are the commands given on the official NEST simulator page "https://nest-simulator.readthedocs.io/en/v3.0/installation/index.html".

    !add-apt-repository ppa:nest-simulator/nest
    !apt-get update
    !apt-get install nest

    A short while after running the commands above, the message "Press [ENTER] to continue or Ctrl-c to cancel adding it." appears below the code cell, with an input box underneath. Press Enter there, and NEST is then installed automatically. (It takes about one minute.)

    Caution: do not run the command !pip install nest. It installs a completely different library that happens to be called nest.

  • At this point, import nest still fails, because the directory where NEST was installed is not among the directories searched at import time. So add the directory where NEST is installed, as follows, in a code cell separate from the one used in step 1.

    import sys
    sys.path.append('/usr/lib/python3.6/dist-packages/')

    After running this, import nest works.

  • Finally, as a check: if the following code runs, you can use NEST on Google Colaboratory. This code is based on "https://nest-simulator.readthedocs.io/en/nest-2.20.1/getting_started.html#how-does-it-work".

    import nest
    import nest.voltage_trace
    nest.ResetKernel()
    # Create the neuron models you want to simulate:
    neuron = nest.Create('iaf_psc_exp')
    # Create the devices to stimulate or
    # observe the neurons in the simulation:
    spikegenerator = nest.Create('spike_generator')
    voltmeter = nest.Create('voltmeter')
    # Modify properties of the device:
    nest.SetStatus(spikegenerator, {'spike_times': [10.0, 50.0]})
    # Connect neurons to devices and specify synapse (connection) properties:
    nest.Connect(spikegenerator, neuron, syn_spec={'weight': 1e3})
    nest.Connect(voltmeter, neuron)
    # Simulate the network for the given time in milliseconds:
    nest.Simulate(100.0)
    # Display the voltage graph from the voltmeter:
    nest.voltage_trace.from_device(voltmeter)
    nest.voltage_trace.show()

  • If the steps above fail, try the following procedure (assuming steps 1 and 2 succeeded). First, run the following command.

    !find /usr -name nest -type d

    You should then see output like the following.

    /usr/lib/python3.x/dist-packages/nest
    /usr/share/doc/nest
    /usr/share/nest
    /usr/include/nest
    /usr/local/lib/python2.7/dist-packages/tensorflow_core/_api/v2/compat/v2/nest
    ...


    Here, the first entry, /usr/lib/python3.x/dist-packages/nest, is the location where NEST was installed. Next, run the following code to check which directories are searched at import time.

    import sys
    sys.path


    You should then see output like this:

    ['',
    '/content',
    '/env/python',
    '/usr/lib/python37.zip',
    '/usr/lib/python3.7',
    '/usr/lib/python3.7/lib-dynload',
    '/usr/local/lib/python3.7/dist-packages',
    '/usr/lib/python3/dist-packages',
    '/usr/local/lib/python3.7/dist-packages/IPython/extensions',
    '/root/.ipython']
    If the directory where NEST is installed is not in the list above, add it as follows, using the NEST directory you found as a reference.

    sys.path.append('/usr/lib/python3.x/dist-packages/')
    After doing this, you should be able to import nest.
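
    If you do not want to look up the Python version by hand, a small helper of our own (not part of NEST) can find the dist-packages directory that contains nest and append it automatically; the paths below are assumptions based on the layout shown above:

    import glob, sys

    # look for a system dist-packages folder that contains the nest package
    for d in glob.glob('/usr/lib/python3*/dist-packages'):
        if glob.glob(d + '/nest') and d not in sys.path:
            sys.path.append(d)

    import nest
    print(nest.__file__)  # should point into /usr/lib/python3.x/dist-packages/nest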

    As an example, consider numpy, which is installed by default on Google Colaboratory. Running

    import numpy
    numpy.__path__

    shows that it is located in the following directory:

    ['/usr/local/lib/python3.7/dist-packages/numpy']
    Naturally, this directory can be found in sys.path.

    2018

    How to install SIESTA: compilation, environment setup, and more (in Japanese)



    In English and in Japanese (by Yusuke Fujii, 藤井祐輔)

    2017

    Install YCU "Security Check" on Ubuntu 16.04"

    At Yokohama City University we have a new proxy and security-check procedure for accessing the internet. Unfortunately, the instructions on YCU's website are for Windows and Mac OS only. With the help of YCU's ICT center, we figured out how to run the security check on Linux too (^_^)/*.
    Here is a brief description of how to do that:

    In English: and in Japanese:
    (these URLs are restricted; they are visible only within the YCU campus)

    Install nupic on Ubuntu


    We used Ubuntu 16.04 and nupic 0.6.0; presumably this procedure will work on your system too.
    1. NUPIC needs Python (2.7), pip (the Python package manager), MySQL (a database server) and Git (a repository manager).
      sudo apt-get install python python-pip mysql-client mysql-server git
    2. Then install nupic with
      pip install nupic (without sudo)
    3. Verify what version of nupic you installed. You can use this command:
      python -c 'from pkg_resources import get_distribution; print "nupic:", get_distribution("nupic").version, "nupic.bindings:", get_distribution("nupic.bindings").version'
    4. Create a directory and clone the source files from Numenta's Git repository into it:
      git clone https://github.com/numenta/nupic.git
      Enter the nupic folder and check out the correct version (for example 0.6.0):
      cd nupic/
      git fetch https://github.com/numenta/nupic
      git checkout tags/0.6.0

      (you know your nupic version from the previous step)
    5. Test your installation with
      py.test tests/unit (if it does not work, locate it with locate py.test and execute the test with /path/to/py.test tests/unit)
    6. If you get no errors, you are OK (otherwise, read the error messages and use the Numenta wiki or Google to fix them).

    7. Nupic swarming uses your MySQL database. The last step to be fully operational is to activate your MySQL server. Use
      service mysql start
    8. Then verify that you can log in with
      mysql -u root -p
      By default you should be able to log in as root with an empty (return) password.
      If you can't, but have another MySQL account that works, you can tell nupic to use that: just change the login information in the nupic configuration files. Do this:
      pip show nupic
      This will show you where nupic-default.xml is. Make a copy of this file and call it nupic-site.xml (it should be in the same folder as nupic-default.xml). Now edit nupic-site.xml and change the MySQL database information (change root to your username and the empty password to your MySQL password).
      Now you should be able to log in to your database with
      mysql -u yourusername -p
    You are ready to use nupic (try some examples, googleSearch).

    Install Siesta 4.1 on Linux Ubuntu 16.04

    This is a very brief guideline on how to install SIESTA (4.1) on your linux machine (tested on Ubuntu 16.04 and 20.04).

    First execute sudo apt update and sudo apt upgrade. Then install the needed libraries with the following commands.

    for both Ubuntu 20.04 and 16.04:
    sudo apt-get install build-essential checkinstall
    sudo apt-get install openmpi-common openmpi-bin libopenmpi-dev netcdf-bin libnetcdf-dev libnetcdff-dev libscalapack-mpi-dev libblas-dev liblapack-dev
    sudo apt-get install openmpi-doc libopenmpi-dev libmpich-dev

    for Ubuntu 20.04 you also need these:
    sudo apt-get install libscalapack-openmpi-dev libscalapack-mpich-dev

    Then do the following:
    1. download siesta 4.1 (link), or use the command: wget https://launchpad.net/siesta/4.1/4.1-b4/+download/siesta-4.1-b4.tar.gz
    2. extract it (tar -xvf siesta-4.1-b4.tar.gz). You will find a folder named "Obj/" inside the main siesta directory.
    3. cd in that folder and then execute the setup script
      sh ../Src/obj_setup.sh
    4. then you need an arch.make file. You can create one starting from the DOCUMENTED-TEMPLATE.make file, which should be in your current Obj folder. Edit this file according to your computer architecture.
    5. Once you are ready, save it as arch.make. To compile just use the command
      make
    6. (If it does not compile, check the examples gfortran.make and intel.make. If you are using Ubuntu you can try our arch.make, made for Ubuntu 16.04 (it is commented for other versions). On Ubuntu 20.04 you probably need the correct LAPACK library; generally it is located in /usr/lib/x86_64-linux-gnu/lapack, and the version that worked for our Ubuntu 20.04 is zipped here. Copy the contents to the lapack folder.
      If you still cannot compile, you have to read the error messages and the manual.)
    7. Once the compiler finishes, you will have the executable "siesta" in the Obj folder.
    8. Now you can work. Create and go in another folder. For example
      mkdir ../myWork
      cd ../myWork
    9. put your .fdf and .psf files there. Also make a symbolic link to the siesta executable, with this command:
      ln -s ../Obj/siesta
    10. then run your code. In parallel mode, the command is:
      mpirun --host localhost:4 -np 4 siesta < gan.fdf > out.dat
      (4 is the number of your processor threads)
    Hope it helps !

    2015

    Multiprocessing for biological neuronal network model, by Sun Zhe


    Multiprocessing is a standard library module of Python, so it is available on any system without extra installation (import multiprocessing). With this library, we can use multiple processors to run different processes at the same time.

    As an example to demonstrate the use of this library, we implemented a small neural network in which each neuron is computed independently on one processor. Multiprocessing not only improves speed, but it is also a more exact and reliable approach for realistic neurons.
    First, we used the Izhikevich model to emulate the behavior of an individual neuron. This model was proposed by Izhikevich and is described in the paper.

    We made a Python class for an individual neuron (Izhikevich model), IzhikevichClass (by Sun Zhe). Based on this class, we linked a chattering neuron 'm' with a fast-spiking neuron 'n' through an electrical synaptic connection: the difference of the neurons' potentials was used as the stimulus signal. In our simulation we used Gaussian white noise to emulate realistic neural noise, and the synaptic delay is set to 0.1 ms.

    To simulate the neuron delay behavior and the stimulus from the other neuron, we used the 'Value' and 'Array' classes of multiprocessing. With these two, numbers and arrays can be stored in shared memory; the type codes 'd' and 'i' indicate a double-precision float and a signed integer. For example, in our program we defined:

    km=Value('i',0)

    'km' is the loop index for neuron 'm'; it indicates how many loops have been processed. Then we create a process object for each neuron; target is the function to be invoked by the start() method.

    p1 = Process(target=neuronM, args=(VsignalM,UsignalM,VsignalN,UsignalN,ww,SNR))

    The start() method launches the process and the terminate() method terminates it. Here is the example Python file (MultiprocessingForNeuralNetwork.py); it uses the above class and the multiprocessing library. The resulting simulation is shown in the following figures, where we plot the traces of the two coupled neurons. In figure 1 the synaptic strength is 0.1 and in figure 2 it is 0.8. The red and green lines represent the first and second neuron action potentials, calculated independently in the two processes.
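
    For readers who just want to see the shared-memory pattern described above, here is a stripped-down toy sketch (not our actual neuron code): two processes exchange their state through Value and Array objects.

    from multiprocessing import Process, Value, Array
    import time

    def neuron(name, my_trace, other_trace, step):
        # toy update rule: each process reacts to the other's previous value
        for k in range(1, len(my_trace)):
            my_trace[k] = 0.5 * other_trace[k - 1] + k
            step.value = k         # shared loop index ('i' = signed integer)
            time.sleep(0.01)       # stand-in for the synaptic delay
        print(name, list(my_trace))

    if __name__ == '__main__':
        km = Value('i', 0)         # shared counter, as in the text above
        vm = Array('d', 5)         # shared double-precision traces ('d')
        vn = Array('d', 5)
        p1 = Process(target=neuron, args=('m', vm, vn, km))
        p2 = Process(target=neuron, args=('n', vn, vm, km))
        p1.start(); p2.start()
        p1.join(); p2.join()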

    2014

    June 2014: LaTeX with Japanese fonts (linux)



    We installed LaTeX on our Ubuntu 14.04. It was difficult for us to find instructions for the correct production of Japanese text, so here we list the packages you need to install to write in Japanese with LaTeX.

  • texlive-lang-cjk
  • texlive-publisher
  • texlive-metapost
  • latex-cjk-common
  • latex-cjk-japanese
  • latex-cjk-japanese-wadalab

    As a LaTeX editor we used TexMaker. In Ubuntu you can find these packages through the package manager synaptic, or install them with the command
    sudo apt-get install
    followed by the name of the package.

    When you have finished the installation, you just have to compile your LaTeX file and Japanese should work. Remember that encoding is very important, so your editor should be set to save your LaTeX file in the correct encoding.
    Here is an example file with the correct settings that should work: link.
    NOTICE:
    If you have an older version of Ubuntu, the latex-cjk libraries are not in the repositories. If this is the case, first add the personal package archive (PPA):
    sudo apt-add-repository ppa:texlive-backports/ppa then update synaptic and follow the instructions as above.

    June 2014



    Master's student T. Tsutsumi made a rare "one hour" continuous measurement of the luminescence of InGaN material. The measurement was done under UV light of 365 nm with a 400 nm filter.

    Very interestingly, he discovered several unknown phenomena in the luminescence: local and fast blinking, a slow long-term universal variation of luminosity, and an accumulation of light on blinking points that with time stabilize into confined luminous domains. These are a few of the phenomena he observed and is currently studying under various conditions.

    This is a compressed real-time video of the measurement: link (90 MB, 352x288, H.26n). The video was taken with an Olympus microscope and a Sony high-speed digital camera at 60 FPS.

    May 2014



    We installed the pyQtGraph package on an Ubuntu 14.04 system. This is a publication-quality graphics library for scientific data. We think that this library is better than the commonly used "pyplot" library.

    It is very simple to use; graphs are inherently interactive and come with several tools for zooming, saving, etc. These tools are similar to pyplot's, but more advanced, faster and easier to use.

    Example plots are here; a guide on how to install and test the package on Linux is here.
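
    As a quick taste, here is a minimal sketch (assuming pyqtgraph and its Qt bindings are already installed; the event-loop call at the end matches the pyqtgraph versions of that period):

    import numpy as np
    import pyqtgraph as pg

    x = np.linspace(0, 10, 1000)
    y = np.sin(x) * np.exp(-0.1 * x)

    # pg.plot() opens an interactive window with zooming, panning and export built in
    win = pg.plot(x, y, title="pyqtgraph quick test", pen='g')
    win.setLabel('bottom', 'time', units='s')

    # start the Qt event loop
    pg.QtGui.QApplication.instance().exec_()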

    2013

    November 2013



    We installed LuxBlender (a physically based optical renderer for the Blender 3D modeling engine) on Blender 2.68 on our Ubuntu 12.04 machines. Here is a step-by-step tutorial on how to do that.