Input management with Dependency Injection

Object-oriented programming offers a lot of patterns that can be very useful for making games. One of them is Dependency Injection, a pattern that helps decouple classes that would otherwise be tightly connected. So let’s take something that’s really tightly connected and see how dependency injection can help us: input management.

Wait, what’s this Dependency Injection?


Usually if you have a thing (call it the client) that uses another thing (call it the service), then when you change the service you also have to change the client. And that’s bad. Let’s say the client is your game logic and you are porting your game from PC to mobile, so you need to switch from keyboard + mouse input to touch input. Since all the inputs change (perhaps radically, since your WASD is now a UI element), you now need to change some input-reading line in your game logic, even if you used an intermediate class to get those button inputs.

The Dependency Injection way to do it instead is to have the input manager call the game logic’s functions, without knowing whose functions they are. You just set them as callbacks and call them when needed. Who sets the callbacks? The naive option is: the client. But then you still have a direct dependency between the classes. Enter the DIC: the Dependency Injection Container. It takes the callbacks from the client and gives them to the service, thus eliminating the dependency between them (while adding another class to your code; it’s not a free lunch).
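To make the idea concrete, here’s a minimal, self-contained sketch of the pattern (the class names here are illustrative, not the ones from this project):

```csharp
using System;

// The service fires a callback slot without knowing who filled it.
class Service
{
    public Action OnEvent = delegate { }; // starts empty, so it's always safe to call
    public void Run() { OnEvent(); }
}

// The client just has plain methods; it knows nothing about the service.
class Client
{
    public void React() { Console.WriteLine("client reacting"); }
}

// The container is the only class that knows both sides.
static class Container
{
    public static void Wire(Service service, Client client)
    {
        service.OnEvent = client.React;
    }
}
```

After `Container.Wire(service, client)`, calling `service.Run()` ends up in `client.React()`, yet neither class references the other.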

And what are those de-leee-gates?

Delegate

A delegate is just a way to pass a function as an argument. It can also be stored in a variable and given a type name, so that only functions matching a certain signature can be stored or passed as a delegate of that specific type.
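For instance (a generic C# snippet, not tied to the input manager):

```csharp
// Only methods matching this signature can be stored in a MathOp.
public delegate int MathOp(int a, int b);

static int Add(int a, int b) { return a + b; }

// Usage:
MathOp op = Add;       // the function is stored in a variable
int result = op(2, 3); // and called later; result is 5
```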

Let’s read some Input!

    [SerializeField]
    string XbuttonName = "Fire1";
   
// other button names 

    [SerializeField]
    string LeftStickHorizontalName = "Horizontal";
    [SerializeField]
    string LeftStickVerticalName = "Vertical";
//other axis names

First of all we’ll need the names of the input buttons and axes we’re going to read; for this example I’ve used a regular Xbox controller. We’ll do this with the old Unity input system, not the (currently) experimental one, so we’ll need a string name for each input. If you’ve read my other tutorials you know I have a personal feud with strings, but this is one of the few cases where you really have to use them: if you are building an input manager you don’t want to force whoever uses it to edit code just to rename an input field, so you really want that in the inspector, which means a serialized string. Notice that for thumbsticks we’ll need two axes per stick, so two thumbsticks means four axes.

    public static InputManager instance;

    [SerializeField]
    InputManagerDIC inputDIC;

    [SerializeField]
    float triggerSensibility = 0.2f;

As for the other variables, the instance reference will be used to make this class a singleton, the inputDIC is needed to ask for the injection, and the trigger sensibility threshold will be used to get a button behaviour out of an axis, because back in my day triggers were fucking buttons and I like it that way.

public delegate void buttonReaction();
public delegate void axisEffect(Vector2 axisVal);

Although we could do all this with predefined System.Action delegates, I’d rather establish a more specific interface that reminds whoever writes the game logic code what is supposed to act as a button and what is supposed to act as an axis. It’s just a reminder, nothing more.

good old controller
    public static buttonReaction XbuttonPress = delegate () { };
    //other press callbacks ...
    public static buttonReaction XbuttonPressContinuous = delegate () { };
    //other continuous callbacks 
    public static axisEffect leftStickEffect = delegate (Vector2 a) { };
    public static axisEffect rightStickEffect = delegate (Vector2 a) { };
    public static System.Action InputStartRead = delegate () { };

Each callback is initialized to an empty delegate because, if for whatever reason we decide not to use one of them, we don’t want a NullReferenceException to pop up after the change.

Now, we can define a lot of callbacks for each Input since every button has four relevant conditions:

  • just pressed
  • pressed (continuously)
  • just released
  • released (continuously)

In this example I’ll use four buttons plus the triggers, and read only two conditions for the buttons (just pressed and continuous press) and one for the triggers (continuous press). For each condition I want to read, I need to define a callback.
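For reference, those four conditions map onto the old Input API like this (the button name is just an example):

```csharp
if (Input.GetButtonDown("Fire1")) { /* just pressed this frame */ }
if (Input.GetButton("Fire1"))     { /* held down (continuous) */ }
if (Input.GetButtonUp("Fire1"))   { /* just released this frame */ }
if (!Input.GetButton("Fire1"))    { /* not pressed (continuous) */ }
```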

The same goes for what to do with thumbsticks, but in that case I just want to read a direction out of them and let the game logic interpret it.

The last callback isn’t strictly needed, but for this tutorial I’ve also built a public repository where you can download a test scene, and there I need to clean the UI state at the beginning of every frame, so I want a callback for that too.

void Awake()
    {
        if (instance == null)
            instance = this;
        else
            Destroy(gameObject);
        inputDIC.LoadInputManager();
    }

As I said before, this is going to be a singleton. And at the beginning of execution we want the DIC to inject its callbacks into the InputManager, so we call its loading function here.

    void Update()
    {
        InputStartRead();
        if (Input.GetButtonDown(XbuttonName))
        { XbuttonPress(); }
        //read other buttonDowns
        if (Input.GetButton(XbuttonName))
        { XbuttonPressContinuous(); }
        //read other buttons
        if (Input.GetAxis(leftTriggerName) > triggerSensibility)
        { leftTriggerPressContinuous(); }
        if (Input.GetAxis(rightTriggerName) > triggerSensibility)
        { rightTriggerPressContinuous(); }

        leftStickEffect(new Vector2(Input.GetAxis(LeftStickHorizontalName), Input.GetAxis(LeftStickVerticalName)));
        rightStickEffect(new Vector2(Input.GetAxis(RightStickHorizontalName), Input.GetAxis(RightStickVerticalName)));
    }

And at last, here’s the action. First we call the “start reading” callback, then for each button we check the relevant states. Notice that for the triggers we read an axis input, and only when it’s over the threshold we set before do we fire the callback, just as if it were a regular button. From the game logic’s standpoint that trigger will be indistinguishable from a button; it even uses the same delegate type for the callback. For the thumbsticks, instead, we read the two axes into a single Vector2 and use that to call the appropriate axisEffect callback.

How about a UI class for testing this?

a really simple ui

I’ve made it as basic as it gets, sorry but no fancy stuff here:

    [SerializeField]
    Toggle xButton;
    //other toggles
    [SerializeField]
    Text rStick;
    //other texts

For each button I’ll set a toggle on and off, while for the sticks I’ll show the direction in a text. All the references are passed with serialized fields in the inspector.

    public void LogCallTLCont() { ShowLogButton(lTriggerButton, "TL Cont"); }
    public void LogCallTRCont() { ShowLogButton(rTriggerButton, "TR Cont"); }
    public void LogCallA() { ShowLogButton(aButton, "A "); }
    public void LogCallB() { ShowLogButton(bButton, "B "); }
    public void LogCallX() { ShowLogButton(xButton, "X "); }
    public void LogCallY() { ShowLogButton(yButton, "Y "); }
    public void LogCallACont() { ShowLogButton(aButton, "A Cont"); }
    public void LogCallBCont() { ShowLogButton(bButton, "B Cont"); }
    public void LogCallXCont() { ShowLogButton(xButton, "X Cont"); }
    public void LogCallYCont() { ShowLogButton(yButton, "Y Cont"); }
    public void LogCallL(Vector2 direction) { ShowLogAxis(lStick, "L stick with dir", direction); }
    public void LogCallR(Vector2 direction) { ShowLogAxis(rStick, "R stick with dir", direction); }

    void ShowLogButton(Toggle toggle, string text)
    {
        toggle.isOn = true;
        Debug.Log(text);
    }

    void ShowLogAxis(Text field, string text, Vector2 direction)
    {
        field.text = direction.ToString();
        Debug.Log(text + direction);
    }

All the callbacks actually use the same couple of functions, logging and setting a UI element each time. But who’s going to reset all those toggles, given that we don’t read the buttons’ release? Our reset function, of course:

    public void ResetUI()
    {
        xButton.isOn = false;
        yButton.isOn = false;
        aButton.isOn = false;
        bButton.isOn = false;
        lTriggerButton.isOn = false;
        rTriggerButton.isOn = false;
        rStick.text = Vector2.zero.ToString();
        lStick.text = Vector2.zero.ToString();
    }

 It’s Injection time

dependency injection input time

The DIC itself is really simple too: all it does is set the callbacks in the InputManager, so it only needs a load function and a field specifying from which class instance it should take the callbacks:

    [SerializeField]
    UserExample target;
    public void LoadInputManager()
    {
        InputManager.XbuttonPress = target.LogCallX;
        InputManager.YbuttonPress = target.LogCallY;
        InputManager.AbuttonPress = target.LogCallA;
        InputManager.BbuttonPress = target.LogCallB;
        InputManager.XbuttonPressContinuous = target.LogCallXCont;
        InputManager.YbuttonPressContinuous = target.LogCallYCont;
        InputManager.AbuttonPressContinuous = target.LogCallACont;
        InputManager.BbuttonPressContinuous = target.LogCallBCont;
        InputManager.leftStickEffect = target.LogCallL;
        InputManager.rightStickEffect = target.LogCallR;
        InputManager.leftTriggerPressContinuous = target.LogCallTLCont;
        InputManager.rightTriggerPressContinuous = target.LogCallTRCont;
        InputManager.InputStartRead = target.ResetUI;

    }

So, as you can see, the InputManager has no dependency on the client class, and the UserExample doesn’t even know that its functions are linked to an input. Any maintenance change on either class will stop here in the DIC, and will be as trivial as changing which callback is assigned to which variable, since that’s all that can happen here.

But what if I just changed Input Settings instead of doing all that?

That’s cool, and that’s also the proper way to do it (as long as you are not porting from PC/console to mobile). Really, unless you are switching between radically different input sources, you’re better off using Unity3d’s input settings to remap controls and avoid changing code. I only used input management as the easiest-to-explain example; if you think this technique is just for that, you’re totally missing the point. This technique can (and according to some people should) be used for absolutely everything.

That’s all folks

Thanks for the read. This time no copy-paste, you get a repository with the whole project already set up and ready to use here. If you have any questions or comments please do express that either in the comments here or just hit me on twitter. And if you don’t want to lose my future stuff, consider my newsletter.

P.S.: I’m currently looking for a job, if you are interested take a look at my portfolio.


Catching Swipes – a touch screen tutorial

One of the hurdles of dealing with touch screens is handling finger tracking and boiling it down to a clean-cut movement, so that you can then do gesture recognition.

Let’s look how to make a component that:

  1. only gets swipes in a specific screen area
  2. can detect multiple touches and store their individual details
  3. measures movement since last frame

Touch interface input – variables needed

For this we’ll need a couple of things in our data:

    public GameObject toCheck;
    Dictionary<int, Vector2> lastTouchPositionByFinger = new Dictionary<int, Vector2>();
    Vector2 lastClickPositionByMouse;
    Vector2 currentMousePos;
    [SerializeField]
    Vector2 netSwipe;
    public Vector2 NetSwipe
    {
        get { return netSwipe; }
    }

First of all we get a GameObject we’ll use to define the area in which swipes shall be caught; it must be a UI element that can be raycast.
The next item is a dictionary in which to store the current position of every tracked finger. Also, since on Windows 10 computers touch screens are treated as mouse input, we’ll need an extra place for the mouse input position. And at last, of course, we have a property which will contain our movement since last frame inside the area.

Notice: there are up to 10 fingers and just one net movement. We’ll accumulate the frame’s total movement in this netSwipe variable; if you need to track the fingers independently, you will need to change this approach.
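If you did need per-finger movement, one possible variation (just a sketch, not part of the final script) would be a second dictionary accumulating a swipe per fingerId:

```csharp
// Hypothetical per-finger variant: one accumulated swipe per fingerId.
Dictionary<int, Vector2> swipeByFinger = new Dictionary<int, Vector2>();

void AddMovement(int fingerId, Vector2 currentPos, Vector2 lastPos)
{
    if (!swipeByFinger.ContainsKey(fingerId))
        swipeByFinger[fingerId] = Vector2.zero;
    swipeByFinger[fingerId] += currentPos - lastPos;
}
```

You would then call AddMovement from the Moved case instead of summing into netSwipe, and clear the entry when the touch ends.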

How to check if touch is over a UI element

So, how do we implement our first requirement? We just ask the event system if a finger is over the image (which of course can be made invisible by setting its alpha channel to 0).

    private static List<RaycastResult> tempRaycastResults = new List<RaycastResult>();

    public bool PointIsOverUI(Vector2 position)
    {

        var eventDataCurrentPosition = new PointerEventData(EventSystem.current);

        eventDataCurrentPosition.position = position;

        tempRaycastResults.Clear();

        EventSystem.current.RaycastAll(eventDataCurrentPosition, tempRaycastResults);
        foreach (var item in tempRaycastResults)
        {
            if (item.gameObject.GetInstanceID() == toCheck.GetInstanceID() && item.index == 0)
                return true;
        }
        return false;
    }

We’ll need an extra list for this function, to store all the results of the raycast. First we poll the EventSystem with a RaycastAll, then we go through every hit object; only if the object we hit is the one we’re looking for, and only if there is no other object over it, do we answer true. Otherwise we’re not hitting the right item and the touch should not contribute to the net swipe.

How to process a touch input in Unity3d

Now we can check if the touch should be counted or not, but that’s not enough. We also need to check for the touch phase in which we are and put tracking in place, let’s start with the Update function’s structure:

 void Update()
    {
        if (Input.touchCount > 0)
        {
            for (int i = 0; i < Input.touchCount; i++)
            {
                var touch = Input.GetTouch(i);
                switch (touch.phase)
                {
                    case TouchPhase.Began:
                        //stuff
                        break;
                    case TouchPhase.Moved:
                        //moar stuff
                        break;
                    case TouchPhase.Ended:
                        //stuff again
                        break;
                    case TouchPhase.Canceled:
                        //last stuff
                        break;
                    default:
                        break;
                }
            }
        }

    }

The first thing to check, of course, is if there are any touches at all. If there aren’t, then we don’t want to waste any resources. Then we cycle through each touch and do stuff appropriately.

Let’s get to the details. Before even starting our check we want to ensure that if no touches are active our netSwipe is reset to zero, so:

void Update()
    {
        netSwipe = Vector2.zero;
        if (Input.touchCount > 0)
        {

Getting to the actual stuff, when the touch begins and ends or gets cancelled we just want to add or remove it from our tracking accordingly, so we have:

                   case TouchPhase.Began:
                         lastTouchPositionByFinger.Add(touch.fingerId, touch.position);
                        break;

and:

                    case TouchPhase.Ended:
                        lastTouchPositionByFinger.Remove(touch.fingerId);
                        break;
                    case TouchPhase.Canceled:
                        lastTouchPositionByFinger.Remove(touch.fingerId);
                        break;

The fun part happens, instead, while the finger is moving:

                    case TouchPhase.Moved:
                        if (PointIsOverUI(touch.position))
                        {
                            netSwipe += touch.position - lastTouchPositionByFinger[touch.fingerId];
                            lastTouchPositionByFinger[touch.fingerId] = touch.position;
                        }
                        break;

First we check if the finger is over the area of interest, then we add its movement since last frame to our netSwipe variable and update the finger’s position in our dictionary, and that’s it! You now also have the tracking and the net movement over the image.

Dealing with Windows 10

        if (Input.GetMouseButton(0))
        {
            currentMousePos = Input.mousePosition;
            if (PointIsOverUI(currentMousePos))
                netSwipe += (currentMousePos - lastClickPositionByMouse);

            lastClickPositionByMouse = currentMousePos;
        }

As I said before, on Windows 10 laptops touchscreen touches are read as mouse input. For that case we’re going to make an exception and use a special treatment. The logic is still the same, but without the per-finger tracking. Simply put, if the button is down, we check if it’s over the UI element we’re tracking, and if so we use the current position versus last frame’s one to contribute to netSwipe.

That’s all folks!

Now go out there and do something awesome with that. I’ll post soon enough a tutorial that makes use of this component to control a gallery of sliding images. Join my newsletter to never miss my stuff and hit me up on Twitter for any remarks!

And of course here’s the script in a copypaste-friendly format:

using System.Collections.Generic;
using UnityEngine;
using UnityEngine.EventSystems;

public class CatchSwipe : MonoBehaviour
{

    public GameObject toCheck;
    Dictionary<int, Vector2> lastTouchPositionByFinger = new Dictionary<int, Vector2>();
    Vector2 lastClickPositionByMouse;
    Vector2 currentMousePos;
    [SerializeField]
    Vector2 netSwipe;
    public Vector2 NetSwipe
    {
        get { return netSwipe; }
    }

    void Update()
    {
        netSwipe = Vector2.zero;
        if (Input.GetMouseButton(0))
        {
            currentMousePos = Input.mousePosition;
            if (PointIsOverUI(currentMousePos))
                netSwipe += (currentMousePos - lastClickPositionByMouse);

            lastClickPositionByMouse = currentMousePos;
        }
        if (Input.touchCount > 0)
        {
            for (int i = 0; i < Input.touchCount; i++)
            {
                var touch = Input.GetTouch(i);
                switch (touch.phase)
                {
                    case TouchPhase.Began:
                        lastTouchPositionByFinger.Add(touch.fingerId, touch.position);
                        break;
                    case TouchPhase.Moved:
                        if (PointIsOverUI(touch.position))
                        {
                            netSwipe += touch.position - lastTouchPositionByFinger[touch.fingerId];
                            lastTouchPositionByFinger[touch.fingerId] = touch.position;
                        }
                        break;
                    case TouchPhase.Ended:
                        lastTouchPositionByFinger.Remove(touch.fingerId);
                        break;
                    case TouchPhase.Canceled:
                        lastTouchPositionByFinger.Remove(touch.fingerId);
                        break;
                    default:
                        break;
                }
            }
        }
    }


    private static List<RaycastResult> tempRaycastResults = new List<RaycastResult>();
    public bool PointIsOverUI(Vector2 position)
    {

        var eventDataCurrentPosition = new PointerEventData(EventSystem.current);

        eventDataCurrentPosition.position = position;

        tempRaycastResults.Clear();

        EventSystem.current.RaycastAll(eventDataCurrentPosition, tempRaycastResults);
        foreach (var item in tempRaycastResults)
        {
            if (item.gameObject.GetInstanceID() == toCheck.GetInstanceID() && item.index == 0)
                return true;
        }
        return false;
    }
}

 


Unity3d WebGL input mapping for Xbox controller

So, this weekend Ludum Dare 35 happened and I made my entry using the Xbox 360 controller as my intended input source. Only one problem there: no existing input guide spoke of how to map the buttons of the Xbox controller for Unity3d WebGL. And that was my target platform.

So, long story short, here’s what I wasted precious jam time looking for:

xbox controller unity3d webgl input map

On the Xbox controller you would usually have the triggers on the back as a 3rd or 5th axis; on WebGL, instead, they are mapped only as standard buttons.

As for the axes: the left stick is the X and Y axis (as usual), the right stick is the 3rd and 4th axis.

You can set them like this:

xbox controller unity axis input

By the way, there’s a simple way to test these mappings on your own!
You can just use this script:

using UnityEngine;
using UnityEngine.UI;

public class testbuttons : MonoBehaviour
{
    [SerializeField]
    Text t;
    // Update is called once per frame
    void Update()
    {
        t.text = "";
        for (int i = 0; i < 20; i++)
        {
            t.text += "Button " + i + "=" + Input.GetKey("joystick button " + i) + "| ";
        }
    }
}

Make an empty scene, add a UI > Text element, and stick this script on it. Then run it and read which entry turns to true when you press each button on your controller.

As easy as pie… unless you are in a game jam frenzy, with severe sleep deprivation and panicking like I was… Want more tutorials like this? Join my newsletter! Want to tell me anything? Hit me up on twitter.


Validating email in C#

Input sanitation is a must when dealing with web services, but it’s also smart to avoid wasting precious seconds in registration procedures due to a wrong email address. That’s true in web development, that’s true in mobile apps, that’s just true on any digital platform.
So it’s best to validate email in Unity3d too, before any web service is called.

The variables

First we get the references via the Unity editor to both the InputField and the Button.

    [SerializeField]
    InputField mail;
    [SerializeField]
    Button sendButton;

Then we create a Regex using the mail validation pattern from the .NET framework (be aware, it’s not perfect, but it’s good enough).

    System.Text.RegularExpressions.Regex mailValidator = new System.Text.RegularExpressions.Regex(@"^((([a-z]|\d|[!#\$%&'\*\+\-\/=\?\^_`{\|}~]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])+(\.([a-z]|\d|[!#\$%&'\*\+\-\/=\?\^_`{\|}~]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])+)*)|((\x22)((((\x20|\x09)*(\x0d\x0a))?(\x20|\x09)+)?(([\x01-\x08\x0b\x0c\x0e-\x1f\x7f]|\x21|[\x23-\x5b]|[\x5d-\x7e]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])|(\\([\x01-\x09\x0b\x0c\x0d-\x7f]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF]))))*(((\x20|\x09)*(\x0d\x0a))?(\x20|\x09)+)?(\x22)))@((([a-z]|\d|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])|(([a-z]|\d|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])([a-z]|\d|-|\.|_|~|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])*([a-z]|\d|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])))\.)+(([a-z]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])|(([a-z]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])([a-z]|\d|-|\.|_|~|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])*([a-z]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])))\.?$");

The Regex class provides us with a pattern-matching function that we can easily use to check whether any input does or does not match the pattern on which it was built. The pattern follows regex syntax, which by the way I found out is not a regular-expression language any more, due to backreferences.
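As a quick usage sketch (sample addresses for illustration, with mailValidator built from the pattern above):

```csharp
bool good = mailValidator.IsMatch("someone@example.com"); // true
bool bad  = mailValidator.IsMatch("not an address");      // false
```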

Validate email

Then, on each Update, we first reset the button’s state, then we make it not interactable again if any of the validation criteria is false. So only if the mail validation check DOES NOT recognize our string as a mail address should it make the button not interactable; in the opposite case it shouldn’t make it interactable, since another validation criterion may fail independently.

    void Update()
    {
        sendButton.interactable = true;

        //other sanitation for other stuff

        if (!mailValidator.IsMatch(mail.text))
                sendButton.interactable = false;
    }

That’s all folks!

Really? really.
It’s not more than just 15 lines of code but it’s a big deal in reducing friction during your registration process for any service or game. If you like this kind of tutorial or just want to hear from me in the future, join my newsletter.

And here’s everything ready for copy-paste :

    [SerializeField]
    InputField mail;
    [SerializeField]
    Button sendButton;
    System.Text.RegularExpressions.Regex mailValidator= new System.Text.RegularExpressions.Regex(@"^((([a-z]|\d|[!#\$%&'\*\+\-\/=\?\^_`{\|}~]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])+(\.([a-z]|\d|[!#\$%&'\*\+\-\/=\?\^_`{\|}~]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])+)*)|((\x22)((((\x20|\x09)*(\x0d\x0a))?(\x20|\x09)+)?(([\x01-\x08\x0b\x0c\x0e-\x1f\x7f]|\x21|[\x23-\x5b]|[\x5d-\x7e]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])|(\\([\x01-\x09\x0b\x0c\x0d-\x7f]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF]))))*(((\x20|\x09)*(\x0d\x0a))?(\x20|\x09)+)?(\x22)))@((([a-z]|\d|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])|(([a-z]|\d|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])([a-z]|\d|-|\.|_|~|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])*([a-z]|\d|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])))\.)+(([a-z]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])|(([a-z]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])([a-z]|\d|-|\.|_|~|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])*([a-z]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])))\.?$");
    void Update()
    {
        sendButton.interactable = true;

        //other sanitation for other stuff

        if (!mailValidator.IsMatch(mail.text))
                sendButton.interactable = false;
    }