
LAM/MPI General User's Mailing List Archives


From: Konrad Karczewski (xeno_at_[hidden])
Date: 2004-12-04 13:37:18


Is your X server listening for incoming connections at all? There is an
option that switches this off (-nolisten tcp), and many distros use it by
default - it's a bit more secure that way...
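
A quick way to check is to see whether the X server is actually bound to
TCP port 6000 (the port for display :0). Just a sketch - it assumes the
net-tools netstat and that your display manager's config lives somewhere
under /etc/X11 (paths differ between distros):

    # if X accepts TCP connections, port 6000 shows up in LISTEN state
    netstat -lnt | grep 6000

    # many distros pass -nolisten tcp via the display manager's config;
    # grep for it to see where it is set on your machines
    grep -r nolisten /etc/X11/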

best regards
Konrad Karczewski
Czestochowa University of Technology

On Sat, 4 Dec 2004, Elie Choueiri wrote:

> Just tried that, I keep getting:
> freeglut (my_prog): failed to open display '192.168.192.3:0.0'
>
> I'm thinking there's some extra security issue I'm missing, as even this:
> lamexec C -x DISPLAY=<host_ip> xeyes doesn't work, and as Ryuta
> Suzuki explained to me, nothing else will :-/
>
>
> On Fri, 3 Dec 2004 16:06:06 -0700, Jeff Squyres <jsquyres_at_[hidden]> wrote:
> > It sounds like you are not setting or exporting your DISPLAY variable
> > correctly. Note that you need to use mpirun's -x option to export the
> > DISPLAY variable to remote processes (I mention this because your first
> > example just shows "DISPLAY=...", and doesn't list the -x option).
> >
> > Here's what I would do:
> >
> > 1. Check the value of your DISPLAY variable. It's typically ":0" or
> > ":0.0", or somesuch (i.e., it doesn't include the local IP address
> > because it's unnecessary when you're displaying on the localhost).
> > Take whatever the value is and prepend your host's public IP address to
> > it. So if the value is ":0.0", then reset it to be
> > "192.168.192.3:0.0".
> >
> > 2. Set your X server to accept connections from all the hosts that you
> > plan to run MPI processes on. Depending on your local setup, this may
> > be with xhost or xauth. For example: "xhost +192.168.192.4". Or, if
> > you're lazy (and within a private network where security isn't
> > tremendously important -- but check your local policies first!), just
> > use "xhost +" (which enables *any* host to write to your X display).
> >
> > 3. Then use "mpirun -x DISPLAY -np 4 my_prog".
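> >
> > Putting those three steps together, it could look something like the
> > following (just a sketch: it assumes a bash-style shell -- use "setenv
> > DISPLAY ..." under csh -- and reuses the 192.168.192.x addresses from
> > your earlier mails, so substitute your real master and slave IPs):
> >
> >    # on the master, i.e. the machine whose X display you want to use
> >    export DISPLAY=192.168.192.3:0.0
> >    xhost +192.168.192.4       # repeat for each node that runs a process
> >
> >    # launch, exporting the DISPLAY value to the remote processes
> >    mpirun -x DISPLAY -np 4 my_prog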
> >
> > Hypothetically, this should work (I have done similar things in the
> > past).
> >
> > All this being said, you may or may not want to do this. Your parallel
> > application is going to be bottlenecked by the processes writing to a
> > remote X server (X uses a *lot* of network traffic). It may be faster
> > to have only one process -- the one on the local node -- write to the X
> > display. Specifically, have all the other MPI processes send whatever
> > data is necessary for the master process to write it to the local X
> > display.
> >
> > Hope that helps.
> >
> >
> >
> >
> > On Dec 2, 2004, at 9:09 AM, Elie Choueiri wrote:
> >
> > > Thanks for the help.
> > >
> > > I tried
> > > mpirun -np 4 DISPLAY=<host_ip:0.0> my_prog
> > > and the master didn't even open a glut window!
> > >
> > > freeglut (<my_prog>): failed to open display '192.168.192.3:0.0'
> > > ^^ Got that message twice, once for each process.
> > >
> > > But with mpirun -np 4 -x DISPLAY <my_prog> - I got the message once,
> > > and a window opened for the master process..
> > >
> > > With the xterm example and using DISPLAY=<host_ip> as you suggested...
> > > gives this
> > > xterm Xt error: Can't open display: <host_ip>:0.0
> > >
> > > What am I doing wrong? :'(
> > >
> > > On Wed, 01 Dec 2004 15:25:19 -0600, Ryuta Suzuki
> > > <suzu0037_at_[hidden]> wrote:
> > >> You need to put something like
> > >>
> > >>>> mpirun -np 4 -x DISPLAY=host.xxx.xxx.xxx:0.0 your.program
> > >>
> > >> Just saying -x DISPLAY doesn't set the environment properly.
> > >>
> > >>
> > >>
> > >>
> > >> Elie Choueiri wrote:
> > >>
> > >>> Hi
> > >>>
> > >>> I've got an opengl program that uses glut (and creates a window
> > >>> using it).
> > >>> I'd like the program to create a window on each process, preferably
> > >>> on their own machines.
> > >>> Running the program on one machine works fine, I even get the
> > >>> multiple
> > >>> windows up, so I'm almost convinced it's a security issue.
> > >>>
> > >>> Oh, and I'm running this on identical Fedora Core 2 machines..
> > >>>
> > >>> MPI is already set up (and works properly, btw)...
> > >>>
> > >>> So, running my program draws a window on the master process[or], but
> > >>> for the slaves - nothing.
> > >>>
> > >>> [esc00_at_fas70522 cc-parallel-rendering]$ mpirun -np 3 Catmull-Clark
> > >>> test-files/cube1.txt
> > >>> freeglut freeglut (Catmull-Clark): (Catmull-Clark): failed to open
> > >>> display ''failed to open display ''
> > >>>
> > >>> -----------------------------------------------------------------------------
> > >>> One of the processes started by mpirun has exited with a nonzero exit
> > >>> code. This typically indicates that the process finished in error.
> > >>> If your process did not finish in error, be sure to include a "return
> > >>> 0" or "exit(0)" in your C code before exiting the application.
> > >>>
> > >>> PID 6246 failed on node n0 (192.168.192.3) due to signal 13.
> > >>> -----------------------------------------------------------------------------
> > >>>
> > >>>
> > >>> I've tried running the FAQ example for getting xterm windows with the
> > >>> following results:
> > >>>
> > >>> [esc00_at_fas70522 cc-parallel-rendering]$ mpirun C -x DISPLAY
> > >>> run_xterm.csh Catmull-Clark
> > >>> Running xterm on fas70533.cs.aub.edu.lb
> > >>> Running xterm on fas70532.cs.aub.edu.lb
> > >>> Running xterm on fas70522.cs.aub.edu.lb
> > >>> Xlib: connection to ":0.0" refused by server
> > >>> Xlib: No protocol specified
> > >>>
> > >>> xterm Xt error: Can't open display: :0.0
> > >>> -----------------------------------------------------------------------------
> > >>> It seems that [at least] one of the processes that was started with
> > >>> mpirun did not invoke MPI_INIT before quitting (it is possible that
> > >>> more than one process did not invoke MPI_INIT -- mpirun was only
> > >>> notified of the first one, which was on node n0).
> > >>>
> > >>> mpirun can *only* be used with MPI programs (i.e., programs that
> > >>> invoke MPI_INIT and MPI_FINALIZE). You can use the "lamexec" program
> > >>> to run non-MPI programs over the lambooted nodes.
> > >>> -----------------------------------------------------------------------------
> > >>>
> > >>>
> > >>> I've already run xhost +<slave_ips> on the master and even run xhost
> > >>> +<master_ip> on each of the slaves.
> > >>>
> > >>> Help please?
> > >>
> > >>
> >
> > --
> > {+} Jeff Squyres
> > {+} jsquyres_at_[hidden]
> > {+} http://www.lam-mpi.org/
> >
> _______________________________________________
> This list is archived at http://www.lam-mpi.org/MailArchives/lam/
>