VMD is the visualization component of MDScope, an interactive environment for molecular modelling and dynamics. As part of that project we are experimenting with new modes of human-computer interaction. Most programs use mouse and menu user interface components, or even a dial box (which, by the way, VMD does not currently support - anyone want to give us one?), but these do not support the full range of control needed for modelling. We envision a system where a ligand can be picked up and inserted directly into an active site, or where one can point to a molecule and ask ``which helix is that?''
This final product is still some time away from completion, but some work has been done to try out various possibilities. Short descriptions of the methods being tried follow, along with possible future directions. Since these are still very experimental, the documentation is sparse and intended only to give you an idea of the current state of development.
The standard VMD executable contains two three-dimensional components, called tools. Only two tools are available, and they are hardwired to the first two sensors of the first enabled tracker. A tool can take on one of two types: a pointer or a grabber. In the future we will develop an external button input device to hook up to the trackers, providing controls separate from the keyboard. For now, we emulate the external device with the keyboard: the keys F1 through F4 control tool 0, while F5 through F8 are the equivalent keys for tool 1.
Each tool has a representation and several intrinsic properties: length, scale, detail, and offset. Changing the length alters the size of the tool's representation in only one direction, while changing the scale uniformly alters the whole object. If the detail is 1, the representation is very simple, often just a few lines; if the detail is 10, the object is drawn as a solid with many polygons and material characteristics. We have found that we only ever use a detail of 10, so this option will be removed in the future. Finally, the offset is used to translate the tool, perhaps to better position it in the scene.
A pointer is just that; it points to different parts of the scene in 3D. It is drawn as a cylinder with a cone on the end. The pointer control can be in one of three states, length, scale, and detail, which determines which value can be changed. Pressing F2 advances through the list by one, e.g., from length to scale to detail and back to length. Pressing the F1 key either increases or decreases the current value; each time the key is released and pressed again, the direction reverses.
Our original intention was to make a 3D pop-up menu appear when the F3 key was pressed, emulating the 3-button pointing device used in the CAVE. From our experiments we have determined that more buttons should be used.
A grabber is used to pick up and move molecules, though the current implementation controls only translation. This tool looks like a cylinder with a cap at the end and is meant to represent a bar magnet. It has only one control, F1, which turns grabbing on or off.
We have done some work on interacting with objects inside the scene, though it is not compiled into the current version of the code. We created boxes in space which, when ``clicked'' with a pointer, executed a text command. This allowed us to use the tools to do things like rotate the system. Working on this interface has revealed some design problems which suggest that the current tentative interface must be completely rewritten. In addition, we will probably build the new version as an extension to Tcl, in a manner similar to Tk.
We are experimenting with other user interfaces, including ones based on speech and gesture recognition. Currently they only issue commands to VMD, so instead of incorporating these programs directly into the code we have developed a simple mechanism for receiving external text commands. It is based on the Tcl extension package Tcl-DP, which adds commands to interface with standard socket communications. This allows other programs to send messages to VMD, where they are interpreted as if they originated from the keyboard. If a command returns information, that data is sent back to the calling process.
The external interface works by setting up VMD as a server for remote execution of Tcl commands (VMD is a Tcl-DP RPC server). Other processes contact VMD by connecting to a port. The remote process sends a text command to VMD, which interprets it and sends the result back to the remote process, which may use it however it likes.
The command external on starts VMD as a Tcl-DP RPC server. When an external process attempts to connect, its host name is checked to see whether it is allowed to run commands on the local machine; if so, the connection is accepted and its commands are run. Note that external off does not disconnect currently attached processes; it only disallows new ones.
There is a simple security mechanism in the external command, derived from the standard Tcl-DP security. It allows or denies new connections based on the host name of the calling process, and is controlled with the command external host.
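As a rough sketch, a server-side session might look like the following. The host name is hypothetical, and the argument form shown for external host is an assumption modelled on Tcl-DP's dp_Host command (which uses a + prefix to allow a host and - to deny one); check the command reference for the exact syntax.

    # start accepting Tcl-DP RPC connections
    external on
    # allow connections from one machine (hypothetical host name)
    external host +somehost.example.edu
    # stop accepting new connections; already-attached processes remain
    external off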
There are two ways to make a client that connects to VMD. The first is to use VMD itself. The command external connect <hostname> ??? will connect to a VMD process on the given machine. The process that initiated the contact is the client VMD and the one that was contacted is the server VMD. The other way is to use a Tcl-based shell which has Tcl-DP compiled in. Source (or look at) the file ??? to see how the DP calls are made.
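For the second route, the DP calls might be sketched as follows, assuming Tcl-DP's standard dp_MakeRPCClient / dp_RPC / dp_CloseRPC interface. The host name, port number, and the command being evaluated are all placeholders; the actual port is determined by the VMD server.

    # connect to a remote VMD server (host and port are assumptions)
    set server [dp_MakeRPCClient somehost.example.edu 9000]
    # evaluate a command in the remote interpreter and capture the result
    set result [dp_RPC $server expr {1 + 2}]
    # close the connection when finished
    dp_CloseRPC $server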
The client process sends a new command to the server with the command external send ???. (Are the $variables expanded??? Is there another option???)
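Putting the two commands together, a client-side VMD session might look like this. The host name is hypothetical, and the rotation command is illustrative only:

    # attach to a server VMD running on another machine
    external connect somehost.example.edu
    # execute a text command in the remote VMD
    external send rot y by 90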
In order to provide a more human interface to the capabilities of VMD, we are developing a (quasi) natural language interface. This will be coupled to a robust automatic speech recognition system to be developed by Dr. Yunxin Zhao of the University of Illinois. The interface will also utilize input from the mouse and 3D pointing devices (such as the experimental gesture recognition system). This will enable the user to control the most frequently used features of VMD without being tied to the keyboard. Even in the absence of a speech recognition system, the natural language interface will provide an alternative to VMD's numerous forms without requiring the user to learn VMD's sometimes cryptic text command structure. Those with an interest in the natural language interface may contact Jim Phillips <jim@ks.uiuc.edu> for additional information as it becomes available.
A hand-gesture interface is being developed in collaboration with Dr. Thomas Huang and Dr. Rajeev Sharma of the University of Illinois. The current implementation uses two cameras to find the position and orientation of a finger on a person's hand. The information about the gesture is sent as a text command to VMD, which overrides the Text tracker coordinates. If a tool is connected to this tracker, the result is identical to using one of the UNC tracker input devices.
Future work will progress along two lines. In the near future we will improve the pointing interface to allow the user to select molecules by pointing at them. After that we will try to recognize specific hand positions and gestures in order to develop a control language, and to synchronize these gestures with spoken commands.
Some other possibilities for the external interface have come to mind. For instance, VMD's command logging could be redirected to a remote process by defining the logging procedure as:

    proc vmdlog {text} {
        external send $text
    }
Remember that only the core VMD commands (???) can be logged, so only those are sent to the remote process.