Telerobotics has the potential to facilitate the repair of satellites in geosynchronous orbit by allowing human operators to interact naturally with remote objects. However, time delays on the order of seconds make it difficult to provide immersive feedback to the operator, motivating the use of predictive visual and haptic displays of the robot and environment. I will present recent work in developing a teleoperation framework that compensates for delays and provides the operator with useful visual and haptic feedback. This framework employs a two-part environment model that predicts the motion of objects in the environment, both in free space and during contact with the robot. Model-mediated teleoperation is used to provide stable haptic feedback based on this two-part model. Two experiments demonstrating the benefits of the approach will be presented. The first, catching a moving object under time delay, makes use of the free-space environment prediction. For this experiment, the robot employs a directional dry-adhesive gripper for robust grasping of the object. The second experiment, pushing an object, tests the environment and robot predictions during contact and in transitions between free space and contact. Results demonstrate the ability of the prediction algorithm to provide reliable feedback and improve operator performance before, during, and after robot-environment interactions.