Yes, I've run that xhost + <ip> line.
I tried running a simple MPI program that just uses XOpenDisplay
to get a window, with the same results, so I'm pretty sure it's a
security issue I'm missing...
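
For what it's worth, the test program was roughly along these lines (a
minimal sketch from memory; the file name, build line, and exact messages
here are mine, not the real code):

/* xtest.c: each MPI rank tries to reach an X server via XOpenDisplay().
 * Build with something like:  mpicc xtest.c -o xtest -lX11
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <X11/Xlib.h>

int main(int argc, char **argv)
{
    int rank = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* NULL means "use whatever $DISPLAY says", which is what glut does too. */
    const char *disp = getenv("DISPLAY");
    Display *dpy = XOpenDisplay(NULL);

    if (dpy != NULL) {
        printf("rank %d: opened display '%s'\n", rank, disp ? disp : "(unset)");
        XCloseDisplay(dpy);
    } else {
        printf("rank %d: FAILED to open display '%s'\n", rank, disp ? disp : "(unset)");
    }

    MPI_Finalize();
    return 0;
}

The remote ranks fail the same way the glut program does, which is why I
suspect X authorization rather than the program itself.
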
On Thu, 02 Dec 2004 16:51:39 +0000, Nelson de Brito <ntbrito_at_[hidden]> wrote:
> Are you allowing others to access the X server on your "target" machine
> (the X server/client distinction is something I don't really want to get into)?
> On your X server, do:
> xhost + ip_address_of_x_client
>
>
> Nelson de Brito
> http://www.fc.up.pt/pessoas/ntbrito
>
>
>
>
> Elie Choueiri wrote:
> > Thanks for the help.
> >
> > I tried
> > mpirun -np 4 DISPLAY=<host_ip:0.0> my_prog
> > and the master didn't even open a glut window!
> >
> > freeglut (<my_prog>): failed to open display '192.168.192.3:0.0'
> > ^^ Got that message twice, once for each process.
> >
> > But with mpirun -np 4 -x DISPLAY <my_prog>, I got the message once,
> > and a window opened for the master process.
> >
> > With the xterm example, using DISPLAY=<host_ip> as you suggested,
> > I got this:
> > xterm Xt error: Can't open display: <host_ip>:0.0
> >
> > What am I doing wrong? :'(
> >
> > On Wed, 01 Dec 2004 15:25:19 -0600, Ryuta Suzuki <suzu0037_at_[hidden]> wrote:
> >
> >>You need to put something like
> >>
> >>    mpirun -np 4 -x DISPLAY=host.xxx.xxx.xxx:0.0 your.program
> >>
> >>Just saying -x DISPLAY doesn't set the environment properly.
> >>
> >>
> >>
> >>
> >>Elie Choueiri wrote:
> >>
> >>
> >>>Hi
> >>>
> >>>I've got an OpenGL program that uses GLUT (and creates a window using it).
> >>>I'd like the program to create a window on each process, preferably
> >>>on each process's own machine.
> >>>Running the program on one machine works fine; I even get the multiple
> >>>windows up, so I'm almost convinced it's a security issue.
> >>>
> >>>Oh, and I'm running this on identical Fedora Core 2 machines.
> >>>
> >>>MPI is already set up (and works properly, btw)...
> >>>
> >>>So, running my program draws a window on the master process[or], but
> >>>for the slaves, nothing.
> >>>
> >>>[esc00_at_fas70522 cc-parallel-rendering]$ mpirun -np 3 Catmull-Clark
> >>>test-files/cube1.txt
> >>>freeglut freeglut (Catmull-Clark): (Catmull-Clark): failed to open
> >>>display ''failed to open display ''
> >>>
> >>>-----------------------------------------------------------------------------
> >>>One of the processes started by mpirun has exited with a nonzero exit
> >>>code. This typically indicates that the process finished in error.
> >>>If your process did not finish in error, be sure to include a "return
> >>>0" or "exit(0)" in your C code before exiting the application.
> >>>
> >>>PID 6246 failed on node n0 (192.168.192.3) due to signal 13.
> >>>-----------------------------------------------------------------------------
> >>>
> >>>
> >>>I've tried running the FAQ example for getting xterm windows with the
> >>>following results:
> >>>
> >>>[esc00_at_fas70522 cc-parallel-rendering]$ mpirun C -x DISPLAY
> >>>run_xterm.csh Catmull-Clark
> >>>Running xterm on fas70533.cs.aub.edu.lb
> >>>Running xterm on fas70532.cs.aub.edu.lb
> >>>Running xterm on fas70522.cs.aub.edu.lb
> >>>Xlib: connection to ":0.0" refused by server
> >>>Xlib: No protocol specified
> >>>
> >>>xterm Xt error: Can't open display: :0.0
> >>>-----------------------------------------------------------------------------
> >>>It seems that [at least] one of the processes that was started with
> >>>mpirun did not invoke MPI_INIT before quitting (it is possible that
> >>>more than one process did not invoke MPI_INIT -- mpirun was only
> >>>notified of the first one, which was on node n0).
> >>>
> >>>mpirun can *only* be used with MPI programs (i.e., programs that
> >>>invoke MPI_INIT and MPI_FINALIZE). You can use the "lamexec" program
> >>>to run non-MPI programs over the lambooted nodes.
> >>>-----------------------------------------------------------------------------
> >>>
> >>>
> >>>I've already run xhost +<slave_ips> on the master and even run xhost
> >>>+<master_ip> on each of the slaves.
> >>>
> >>>Help please?
> >>>_______________________________________________
> >>>This list is archived at http://www.lam-mpi.org/MailArchives/lam/
> >>>
> >>>
> >>
> >>