Hi,
If you need to do a little bit of sound editing and haven’t already discovered it, I’d like to recommend Audacity. I’ve used it to edit the sounds in my apps, and I like it.
That’s all for now. Take care 🙂
I’ve wanted to add an AI to the 10.000 game for a while, since a single-player option seems like a necessity if I want to retain users. I, for one, was getting a little tired of playing both sides whenever I couldn’t find someone else to play against.
The game before implementing the AI
Up until now, the game has been built around the assumption that all input comes from users. I didn’t really take any measures to accommodate alternative input sources, such as an AI.
Drawing the architecture of the app gives you:
UI -> Model
Needless to say, I’ll put some thought into the architecture before punching in any lines of code, since there’s no simple way to integrate an AI into the existing design. As the model stands now, it exposes a set of methods you can use to manipulate its state, which gives me some flexibility.
Examples of this are:
public boolean canRoll()
public void performRoll()
public boolean performLock(int diceNo)
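For illustration, here’s a minimal sketch of what a model exposing those methods might look like. Only the three signatures above come from the actual app; the fields and rules are my own assumptions about a dice game.

public class GameModel {

    private final boolean[] locked = new boolean[6];
    private boolean hasRolled = false;

    // A roll is allowed as long as at least one die is still unlocked.
    public boolean canRoll() {
        for (boolean isLocked : locked) {
            if (!isLocked) {
                return true;
            }
        }
        return false;
    }

    public void performRoll() {
        // Roll all unlocked dice here.
        hasRolled = true;
    }

    // Locking a die is only legal after a roll, and only if it isn't locked yet.
    public boolean performLock(int diceNo) {
        if (!hasRolled || diceNo < 0 || diceNo >= locked.length || locked[diceNo]) {
            return false;
        }
        locked[diceNo] = true;
        return true;
    }
}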
Caveat: Expanding the app with an AI
I could write an AI class that manipulates the model through the available methods. However, if I did that, nothing in the model would distinguish the AI’s calls from the UI’s, and either of them could act at any time, regardless of whose turn it is.
Using the Observer pattern (through the Java event model) enables me to publish events. With these events I can trigger the two classes and get them to do something: a calculation, a move, etc. It doesn’t, however, stop them from performing actions whenever any event is published. The Command pattern could probably be used here to give the model a uniform way of validating which actions are OK and which are not. That only partly solves the problem, though, since it doesn’t stop the AI from performing a task out of turn, and the UI would still accept user-generated input.
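To make the event publishing concrete, here’s a minimal sketch in the style of the Java event model; all the names are my own, not the app’s actual API. It also shows the problem: every registered listener hears every event, so the event mechanism alone can’t restrict who acts.

import java.util.ArrayList;
import java.util.EventListener;
import java.util.List;

// Listener interface in the style of the Java event model.
interface TurnListener extends EventListener {
    void onTurnStarted(Object whoseTurn);
}

class TurnPublisher {

    private final List<TurnListener> listeners = new ArrayList<TurnListener>();

    void addTurnListener(TurnListener listener) {
        listeners.add(listener);
    }

    // The model calls this when the turn changes. Both the UI and the AI
    // receive the event, and nothing here stops either of them from
    // reacting out of turn, which is exactly the problem described above.
    void fireTurnStarted(Object whoseTurn) {
        for (TurnListener listener : listeners) {
            listener.onTurnStarted(whoseTurn);
        }
    }
}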
The proposed changes so far
Updating the architecture with the AI gives me this diagram:
UI -> Model <- AI
It only states that the UI and AI should exist as separate objects and that they can independently make calls to the model. To ensure that only the allowed object can perform actions at a given time, the model needs to check who’s making the calls.
Caveat: Figuring out who invokes a method
The model could use Reflection to figure out who invoked a given method. Using Reflection does generate an overhead in resource consumption compared to simpler solutions, and I’m not entirely convinced that it’s secure. I base that assumption on this thread.
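For reference, this is roughly what the stack-trace flavour of that idea looks like. GameUi and GameAi are hypothetical class names, and the method body is mine, not the app’s actual implementation:

public void performRoll() {
    StackTraceElement[] trace = Thread.currentThread().getStackTrace();
    // trace[0] is getStackTrace itself and trace[1] is performRoll,
    // so trace[2] should be the caller (the exact indices can vary between VMs).
    String callerClass = trace[2].getClassName();
    if (!callerClass.endsWith("GameUi") && !callerClass.endsWith("GameAi")) {
        return; // ignore calls from anything else
    }
    // ...perform the roll as usual
}

Note that this only reveals the calling class, not which object instance made the call, which is part of why I’m not convinced it’s a secure check.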
A simpler implementation would be to change all public method signatures to include a reference to the calling object as the first parameter. Every caller must fill this in with a “this” reference, letting the model check whether the call is legal. This doesn’t guarantee a 100% solution to the security issue either, since you could just pass a reference to another object, but I’ve decided there’s no need to bring out any bigger guns when there’s only a game of 10.000 at stake.
The aforementioned examples get updated to:
public boolean canRoll(Object caller)
public boolean performRoll(Object caller)
public boolean performLock(Object caller, int diceNo)
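Here’s a sketch of the check inside the model; the currentPlayer field is my own invention to illustrate the idea, assuming the model keeps a reference to whichever object’s turn it currently is:

public class GameModel {

    private Object currentPlayer; // set to the UI or the AI when the turn changes

    public boolean performRoll(Object caller) {
        // Identity comparison on purpose: only the exact object whose turn
        // it is may act. As noted, a caller could still cheat by passing
        // someone else's reference, and I can live with that.
        if (caller != currentPlayer) {
            return false;
        }
        // ...roll the dice
        return true;
    }
}

The UI and the AI then call model.performRoll(this), and the model simply rejects the call from whichever of them isn’t up.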
Now we’re getting somewhere 🙂
Implementing the AI
Since the AI will be doing some computations, and needs to be able to idle for a bit so that the user can follow the AI’s actions in the UI, it’s necessary to run it on a separate thread.
Caveat: AI and the UI thread
Implementing the AI by extending the Thread class or implementing Runnable doesn’t by itself detach the AI from the UI. If its wait states and computations end up running on the UI thread, they’ll lock up the UI. This is a sure way of degrading the user experience and may even result in some ANRs. It should be obvious that you’ll want to avoid this.
The answer is to use AsyncTask to detach the AI from the UI. AsyncTask is part of Android’s threading framework and solves the problem of blocking the UI thread. An alternative could be an IntentService, but with a little careful programming I believe an AsyncTask will solve my problem just fine.
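As a sketch of what that could look like, here’s the AI as an AsyncTask. I’m assuming the task object itself is what the model knows as the AI player, and GameModel is the model from before; the real classes may differ.

import android.os.AsyncTask;

public class AiTask extends AsyncTask<Void, String, Void> {

    private final GameModel model;

    public AiTask(GameModel model) {
        this.model = model;
    }

    @Override
    protected Void doInBackground(Void... unused) {
        // Runs off the UI thread, so waiting here doesn't freeze the UI.
        while (!isCancelled() && model.canRoll(this)) {
            model.performRoll(this);
            publishProgress("AI rolled"); // hand a status update to the UI thread
            try {
                Thread.sleep(1000); // idle so the user can follow the moves
            } catch (InterruptedException e) {
                return null; // cancelled while sleeping
            }
        }
        return null;
    }

    @Override
    protected void onProgressUpdate(String... status) {
        // Runs on the UI thread; this is the safe place to update views.
    }
}

The Activity would kick it off with new AiTask(model).execute() when it becomes the AI’s turn.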
One thing to be aware of, though: if your Activity gets recreated, for instance when you rotate the screen, a running AsyncTask isn’t cleaned up automatically; it keeps running with a reference to the now-destroyed Activity. Program your solution accordingly.
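One way to handle that, assuming the Activity keeps a reference to the running task in a field like aiTask (my own naming), is to cancel the AI when the Activity is torn down:

@Override
protected void onDestroy() {
    super.onDestroy();
    if (aiTask != null) {
        aiTask.cancel(true); // interrupts the sleep and lets doInBackground exit
    }
}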
That pretty much wraps up my thoughts on how to expand the architecture of the 10.000 app. Be sure to check back for more information 🙂