We're searching for a good workflow for our TypeScript project. Which approaches do you use?

Topics: General
Jan 18, 2014 at 12:15 PM
Hey!

We're playing around with TypeScript and we really love it! We want to rewrite an existing AngularJS app in TypeScript.

Right now we're searching for a good workflow for our project. We want one file per class, and the final output should be a single JS file.

We wonder how to manage all the dependencies between the individual files. A lot of people suggest having one reference.ts file that contains all the references. We adopted this particular strategy: http://www.youtube.com/watch?v=0-6vT7xgE4Y

This works fine, but our laptops aren't very new and are quite slow, so we have compilation times of more than 15 seconds. This is quite annoying, especially when we experiment with UI/UX things: to try some little changes we always have to wait 15 seconds. This happens because with the reference.ts file, all files are always compiled again. Is there a workflow which only compiles the changed file and not all the referenced files?

We don't use Visual Studio. We use Node.js, WebStorm 7 and Grunt.

Thanks for any advice and hints
Tschoartschi
Jan 18, 2014 at 6:38 PM
Edited Jan 18, 2014 at 6:47 PM
Roughly how much code are you compiling that takes that long?

I'm not really familiar with WebStorm, but your IDE might use incremental compilation for code completion and syntax highlighting as you edit, and then invoke a full build when you run the app.

In general, the old workflow of "make a tiny change and then reload the browser to see it" has gone out the window. It's also an entirely unproductive workflow. Batching changes into changesets before running and manually testing/examining is a mindset that is lacking in front-end development. Can you imagine if backend applications were built one change at a time?

My tip is to heavily leverage a good web debugger like Chrome's WebKit inspector. It's an IDE in and of itself. Prototype CSS and markup changes in the browser using the inspector before altering any code. You can even modify and save code in the inspector. In fact, you can modify code while it's running, and when you save, Chrome will step back in time to before the modified portion so you can run the modified bits(!). Since TS output is close to idiomatic JS, copying prototyped changes back into the source is easy.

Separation of concerns is important too. You're using Angular, so you're probably not writing much DOM handling code. Excellent! Leaning on data-binding and templating cuts down on code costs across the board. Avoid writing against the DOM like the plague.

Build times within a specific TS version are going to increase linearly as your codebase grows. That's the price for static analysis and type safety, and it's one that's worth paying, IMO. We now pay in time up front for avoiding certain classes of bugs, rather than finding them later at runtime. That's a good thing.
Jan 18, 2014 at 7:07 PM
Hello,

My current workflow:
  • only external modules - that makes dependencies simple, as you don't need to use reference files at all and you have one class (or group of functions) per file. Dependencies are automatically resolved by tsc (at compile time) and the AMD loader (at run time). It's just like imports in Java - you import something and it Just Works, without worrying about ordering etc. For that to work you'll need an AMD loader (e.g. require.js), but I used one anyway in my JavaScript projects. Honestly I can't imagine working on a big project (TS or JS) without require.js. (There's a minimal sketch of this at the end of this post.)
  • enable source maps generation and you'll be debugging real source code in browser.
  • code is minified using grunt (all dependencies into one file + minify it). You get one JS file (plus minified HTML and CSS as part of the grunt task). As an added bonus, the require.js minifier also outputs source maps, so you can debug the minified JS too (although currently you'll see JS instead of TS).
  • I wrap all "native" JS libraries in define() calls (for AMD loader) and add definition files (.d.ts) so they behave like regular TypeScript dependencies.
  • when I checked IDE support 2 months ago, Eclipse had the best plugin (I checked VS, Eclipse, IntelliJ and Sublime Text): https://github.com/palantir/eclipse-typescript It recompiles files as you save, so you don't need grunt watch for that. It almost feels like early Java development :)
  • tslint in Eclipse ( https://github.com/palantir/eclipse-tslint ) and in grunt file, failing the build when it detects warnings.
I've built a sample project that shows most of those (external modules, require.js and minification): https://github.com/wiktor-k/ts-amd It's simple but you can get the idea of how it works.
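To make the external-modules point concrete, here is a minimal sketch (the file names and the Greeter class are made up for illustration, not taken from the repo):

/* greeter.ts - an external module; whatever it exports is its public surface */
export class Greeter {
    constructor(private name: string) { }
    greet(): string {
        return "Hello, " + this.name;
    }
}

/* main.ts - the import pulls in the types (tsc, compile time) and
   loads the file (AMD loader, run time); no /// <reference> needed */
import greeter = require("./greeter");

var g = new greeter.Greeter("world");
document.body.textContent = g.greet();

Compiling with tsc --module amd turns each file into an AMD module, and require.js makes sure greeter.js is loaded before main.js runs.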
Jan 19, 2014 at 10:34 AM
Thank you for your advice! It is true that we could use the Chrome dev tools more. But I don't totally agree with the idea of bundling many changes and then testing/debugging them. Yeah, but maybe we'll change our minds if we use the debugger more effectively.
I don't know why compilation takes that long, but I think it is a grunt issue. For more details see: https://github.com/basarat/grunt-ts/issues/7 The whole project has about 70 files and 4500 lines of code. Running the compiler on the command line takes about 5 seconds, which is significantly faster than the grunt plugins (we also tried others besides grunt-ts).

I'll definitely try the workflow which wiktork suggested and have a look into his repo. If I have more questions I'll come back here ;)

PS: my laptop is more than 4 years old, so I understand that compilation cannot be blazingly fast. (CPU: Intel Core i5 520M @ 2.4 GHz with 8 GB RAM)
Jan 19, 2014 at 11:00 AM
I am honestly not being facetious: why are you using hopelessly inadequate hardware to do your jobs?

--
Mark Rendle
Founder & CEO
Oort Corporation
Makers of Zudio

Jan 19, 2014 at 12:01 PM
@tschoartschi I've got a project of over 50 files and 3700 lines of code (excluding .js or .d.ts files) and the compilation in grunt also takes about 4 or 5 seconds, but with the Eclipse plugin Ctrl+S compiles so fast it almost feels like JavaScript development - Ctrl+S, switch to browser, F5, voilà! I won't say it's a miraculous silver bullet but it's worth trying out.

I've had issues where the plugin produced an empty file, but then you can run the grunt typescript task on the entire project and it works.

The repo I provided has a minimal setup so you can check whether it suits you; there are several things to optimize there (in the grunt build step).
Jan 19, 2014 at 3:28 PM
wiktork wrote:
Honestly I can't imagine working on a big project (TS or JS) without require.js.
We have a Visual Studio-based project structure with the following statistics:
  • 25 separate projects
  • 450 TypeScript files (includes about a hundred .d.ts files)
  • 325 exported classes + 50 private classes
  • Some 40K lines of code
We do not use external modules (i.e. we do not use Require JS), instead we use the internal <reference> mechanism.

Some features of the structure and workflow
  • One class per file.
  • All classes contained within a hierarchical namespace structure originating from a single property on Window, e.g.
window.rootNamespace : { 
    namespaceOne: { ClassOne: ... }, 
    namespaceTwo: { ClassTwo: ...  } 
}
  • No source maps (because the compiled JavaScript is eminently readable).
  • In debug mode every single JavaScript file is served up individually to the browser.
  • In release mode all scripts are bundled and minified into a single JavaScript file.
  • The Visual Studio solution can scale by adding new projects and configuring them with the bundling mechanism
  • There is no extra maintenance when a new TypeScript file is added to a Visual Studio project.
  • The project is organised into a set of library classes and client classes.
  • Library classes may depend on other library classes but may not depend on client classes (obviously)
  • Client classes may depend on each other and the library classes.
  • Each file compiles on save, a project can be compiled individually or the entire solution can be compiled.
We have had no complaints, with regard to scalability, compilation, debugging, maintenance - in general the coding, debugging, deployment workflow functions well and speedily at that.

I would be interested to learn if anyone is able to suggest an improvement to this set-up with AMD or anything else.
Jan 19, 2014 at 3:44 PM
Hi nabog,

Can you elaborate on this point "In debug mode every single JavaScript file is served up individually to the browser."? How do you serve those files?
Do you by "debug mode" also mean development?
How do your <script> tags look in HTML?

I'd like to compare your workflow with mine.
Honestly I can't imagine working on a big project (TS or JS) without require.js.
Yeah, that can also be explained by limited imagination :)
Jan 19, 2014 at 4:10 PM
Hi, wiktork,

The files are bundled and served up by the ASP.Net Bundling and Minification mechanism.

The file containing the script tag is a ".cshtml" file that is processed by the ASP.Net pipeline and converted into a corresponding HTML file. The directive for generating the scripts is a one-liner:
<script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jquery/2.0.0/jquery.js"></script>
   
<!-- Custom scripts -->
@Scripts.Render("~/FooScripts")
In this snippet we have a regular script tag (for jQuery, say) and a directive to the ASP.Net bundler to load the bundle "FooScripts". The bundles are configured in C# code, which you can read about in the link above.

The switch between generating script tags for each individual file versus a single script tag is controlled in the web config:
<system.web>
    <compilation debug="true" />
</system.web>
In debug (yes, development) mode the flag is "true", while in a release build this is automatically converted to "false".

This may sound complicated, but it's one of those configure-once-and-forget procedures.

I believe the actual bundler lives in the .Net System.Web.Optimization.dll and may have an open source equivalent.
Jan 19, 2014 at 5:17 PM
Okay, now I understand that.

It seems your workflow is tied to ASP.NET, so it will probably suit you better than AMD + require.js.

I'll take your list of features and compare with mine:
  • One class per file. - I've got the same although some files are just plain modules with directly exported functions
  • All classes contained within a hierarchical namespace structure originating from a single property on Window (e.g
    window.rootNamespace : {
    namespaceOne: { ClassOne: ... },
    namespaceTwo: { ClassTwo: ... }
    }
    - During development I've got only require.js functions exposed ("require" and "define") and in production build there are no symbols exported in window (see below for explanation).
  • No source maps (because the compiled JavaScript is eminently readable). - I've got source maps but yes, the JS code is readable as it is + I've got source maps for the minified JS file (for debugging in production, sometimes they are very handy).
  • In debug mode every single JavaScript file is served up individually to the browser. - in development I have only one <script> tag to load require.js and point to the main module. All other dependencies are automatically transitively inferred and loaded in correct order.
  • In release mode all scripts are bundled and minified into a single JavaScript file. - the same
  • There is no extra maintenance when a new TypeScript file is added to a Visual Studio project. - the same
  • The project is organised into a set of library classes and client classes. - my project is small enough so that the library client classes are just subdirectories
  • Each file compiles on save, a project can be compiled individually or the entire solution can be compiled. - the same. I've got an IDE that compiles on save and a Grunt task that lints everything, then compiles everything, concatenates, and minifies JS, CSS and HTML, and gzip-compresses it all for faster serving of static resources.
The main difference from your workflow is that with require.js you don't need server-side processing at all during development; you can just have a "dumb" HTTP server that serves plain files (plus the IDE compiling ts -> js).

tsc --module amd emits special define calls that are used by require.js to asynchronously load dependencies before any component that depends on them runs. The define calls also make sure that no symbols leak into the global object, so you don't need to organize namespaces using JS objects (but rather by file and folder structure). require.js is also smart enough to load only those dependencies that you really use in your application. The minifier is just a loader that puts all the used dependencies into one file and minifies them.
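For illustration, here is roughly what that emit looks like (a sketch; the logger module and its log function are made up). Given this TypeScript:

import logger = require("./logger");
export class App {
    start() { logger.log("started"); }
}

tsc --module amd produces approximately:

define(["require", "exports", "./logger"], function (require, exports, logger) {
    var App = (function () {
        function App() { }
        App.prototype.start = function () { logger.log("started"); };
        return App;
    })();
    exports.App = App;
});

require.js reads "./logger" out of the dependency array and loads it before running the factory function.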

The dependency information can be used by different tools to produce for example dependency graphs: https://github.com/pahen/madge#examples

See my other thread for what the tsc output looks like when using --module amd: https://typescript.codeplex.com/discussions/510900
And the "Why AMD" page is also interesting: http://requirejs.org/docs/whyamd.html#amd
This may sound complicated, but it's one of those configure-once-and-forget procedures.
Haha, I could say the same about require.js + external modules :)

I have an extremely simple sample application that demonstrates this approach: https://github.com/wiktor-k/ts-amd You'll need npm and grunt ( http://gruntjs.com/ ) to do the production build, but for development you just need to compile the ts files (tsc.cmd or VS). Then it just works (assuming you're serving the files over HTTP, not locally).

And I wouldn't necessarily consider it an "improvement" over yours. They're just different mostly because we have different needs and constraints. But it certainly never hurts knowing other ways of doing things :)
Jan 19, 2014 at 8:08 PM
Regarding the server-side processing during development, in practice the ASP.Net code is pretty efficient and processes and serves up the 300+ files in less than a second. So the "compile-on-save + F5 in the browser" workflow works instantly. But I agree it would be nice to have only a dependency on a "dumb" HTTP server.

(BTW: ASP.Net is another open source project by Microsoft.)

The main differences (assuming a one class per file approach) as I see them are:
  • With the Require JS approach outlined above, all code needs to be organised under one root directory, because as you say "I have only one <script> tag to load require.js and point to the main module". Not sure how it's possible to compile a project individually with this set-up - unless there is some scripting being done. With Visual Studio and bundling one is free to have projects anywhere on the network, since they can be referenced by the solution. The ASP.Net bundler also works on CSS files and a linter can be run as a post build step if necessary.
  • With AMD the browser needs to make an additional HTTP request for every new class that needs to be loaded. With the bundled approach there is only ever one request per script bundle. I believe there is a Require JS optimiser that bundles together classes, but that seems to suggest extra maintenance and we lose the "load only when required" advantage.
  • While Require JS does load the dependencies for each class, it is necessary to explicitly specify each dependency via import foo = require("./mydependency");. This generates an additional closure per class: define(["require", "exports", "./mydependency"], ...). This creates additional maintenance and also extra code that needs to be transmitted over the wire. With the <reference> approach we instead inform the class "here is an interface, just assume that a type that implements this interface will be available at runtime".
I believe these factors will have a significant impact when one is considering 300+ classes.

I do agree that each approach suits a different need and set of constraints - so, yes, it's useful to know exactly which bits one is losing or gaining by picking a particular approach.
Jan 19, 2014 at 9:26 PM
I see, so the @Scripts.Render("~/FooScripts") concatenates all the scripts and inserts one <script> tag? How does it know the correct order of concatenation? Builder configuration? Reference files?
With the Require JS approach outlined above, all code needs to be organised under one root directory
What do you mean by "one root directory"? Do you mean the fact that I'm putting all my scripts under a "scripts" directory? You can access modules up in the folder hierarchy using relative imports like "../../lib/Promise". And of course all the ts files are not in one big flat directory :)
With AMD the browser needs to make an additional HTTP request for every new class that needs to be loaded
Yes, that's true. During development one script per class is loaded. But loading from localhost is not THAT slow. One big file sounds faster though :)
With the bundled approach there is only ever one request per script bundle.
As you're getting that during development time I think it's nice. But how does debugging and code browsing feel in one huge file?
I believe there is a Require JS optimiser that bundles together classes, but that seems to suggest extra maintenance and we lose the "load only when required" advantage.
Yes, the optimiser will do that (bundle classes together) and more (it can also act as a preprocessor, inline text templates, or conditionally remove code fragments, like [Conditional("DEBUG")] in C#, etc.). I think that's what your bundler does - builds the production site.

You don't lose "load only when required" - during optimization it will take only those dependencies that are really used by your main module. So if you've got a library with 3 classes and project A uses only one class from it, the r.js optimizer will take only that one.
This generates an additional closure
Minification + gzipping will reduce the file size - especially when the same pattern is repeated over and over. Sure, there is a runtime cost, but for me it's a micro-optimization (like... do you avoid extending other classes because that would mean a longer prototype chain and slower method calls?). Or maybe you're targeting older browsers - that's a different constraint. I'm personally waiting for browsers to implement modules natively: http://wiki.ecmascript.org/doku.php?id=harmony:modules But for now I get easy development and fast runtime in modern browsers.
This creates additional maintenance
What do you mean by maintenance in this context? I don't count code autogenerated by tsc as code that needs maintaining.
I believe these factors will have a significant impact when one is considering 300+ classes.
When I hit that mark I'll tell you what the difference is, as I plan to measure it :)
I do agree that each approach suits a different need and set of constraints - so, yes, it's useful to know exactly which bits one is losing or gaining by picking a particular approach.
Yes, each choice is a trade-off. I wanted to build a project that's completely backend agnostic (servers and technologies) and uses JavaScript for what it's good at, and then some.

Well, if you think about it, it's great that the TypeScript team could build a tool that can serve so many different needs. I usually select my dependencies (tools, frameworks) very carefully, but TS brings so much value that now I can't live without it! :)
Jan 20, 2014 at 9:59 AM
So the @Scripts.Render("~/FooScripts") concatenates all scripts and inserts one <script> tag?
Only in production. In development it's the same as your workflow: one script tag per file. So yes, in development there is one HTTP request per file. How does Require JS work in production to avoid the multiple HTTP requests?
How does it know the correct order of concatenation? Builder configuration? Reference files?
There is a builder configuration. The organisation of the projects into library and client ensures the configuration is minimal. However, this is the primary advantage of Require JS over my method.
What do you mean by "one root directory"?
AMD
  • Root
    • Lib
      File1.ts
    • Client
      File2.ts
My method
  • Lib
    File1.ts
  • F:\somepath\Client
    File2.ts
But how does debugging and code browsing feel in one huge file?
This is a misunderstanding. See the first answer.
What do you mean by maintenance in this context?
The fact that with AMD every class that has a dependency on class Y must have an import statement referencing class Y. If class Y needs to move or be renamed then all import statements need to be modified. This is the maintenance that I refer to. With the <reference> approach we only deal with ".d.ts" files, and the reference is added only once per project.

E.g:
  • Lib
    _exports.d.ts // Exported types
    File1.ts
    ...
    FileN.ts
  • Client
    _references.d.ts // Contains <reference path="../Lib/_exports.d.ts" />
    File2.ts
From the discussion it's a bit clearer that
  • The Visual Studio-based approach has a further unit of compilation that is not explicitly spelled out in your approach: that of a "Project", which is a collection of files. So, while it is necessary to configure the dependencies, it doesn't need to be done at the individual file level, rather it's done at the project level. So one may say "This is my Library project, I will list it first in my bundling configuration in order to ensure those are the first <script> tags".
  • With AMD there is no need to worry about dependencies, so long as they are specified correctly via import statements. However, for our requirements this is too granular.
Jan 20, 2014 at 10:44 AM
Edited Jan 20, 2014 at 11:00 AM
How does Require JS work in production to avoid the multiple HTTP requests?
The same way Scripts.Render does - by concatenating and minifying everything into one script file. But where you switch the debug flag to false (or create a Release build), I execute "grunt build" from the command line and get an optimized site in a "dist" subdirectory.

Well, why talk about abstract things - you probably saw this micro-project: https://github.com/wiktor-k/ts-amd
I checked it out at https://metacode.biz/sandbox/ts-amd/ - no further modifications just git clone and compile ts (grunt).
After running "grunt build" in the same directory I get the production build: https://metacode.biz/sandbox/ts-amd/dist/

Fiddling with developer tools will show you how it's structured.
What do you mean by maintenance in this context?
The fact that with AMD every class that has a dependency on class Y must have an import statement referencing class Y. If class Y needs to move or be renamed then all import statements need to be modified.
Yes, that is a weak point of my approach, although I like that the dependencies are clearly specified: just glancing at the file header I can see how coupled it is with the rest of the system (I think it's called confirmation bias :) )

If you move a class to a different module, you still need to update references though? Or does refactoring do that for you?

The _exports.d.ts and _references.d.ts sound like a clean solution for dividing code into projects. Do you know if it's something that the VS plugin provides, or can it be used from the command-line "tsc" tool? A quick Google search didn't yield any interesting articles on how to set this up.
Jan 20, 2014 at 12:26 PM
The "project" is a Visual Studio thing. It is a way of organising a collection of code files that logically belong together. This works for C#, VB etc. and now there is a project type for TypeScript as well.

In the established C#/VB projects, Project A can add a reference to Project B in order to import all public types contained within that project.

TypeScript projects at the moment do not permit referencing other projects. So the _exports/_references mechanism is something we've come up with to deal with it.

It looks a bit like this

ProjectA
_exports.ts // <reference path="One.ts" /> <reference path="Two.ts" />
One.ts
Two.ts
Three.ts // This is not exported

ProjectB
_references.ts // <reference path="../ProjectA/_exports.d.ts" />
// Now has access to classes One, Two

If I were to rename class Two.ts to FooBar.ts (also renaming the TypeScript class defined within) then I only need to update ProjectA/_exports.ts.

When ProjectB is compiled I would get an error that "class Two does not exist". The thing to note is that it is a compilation error - not a reference error.

Of course the way to avoid the compilation error is to ensure we use interfaces rather than concrete classes:

ProjectInterfaces
_exports.d.ts
MyInterfaces.d.ts // Defines interface FooBar

ProjectA
_references.ts // <reference path="../ProjectInterfaces/_exports.d.ts" />
One.ts
FooBar.ts // Implements interface FooBar
Three.ts

ProjectB
_references.ts // <reference path="../ProjectInterfaces/_exports.d.ts" />
// Only knows about interface FooBar, doesn't care who implements it

Will take a look at your project as well. Thanks!
Jan 20, 2014 at 12:43 PM
Okay now I understand it.

I also saw that the _references.ts file is supported by tsc: http://blogs.msdn.com/b/typescript/archive/2013/12/05/announcing-typescript-0-9-5.aspx That may come in handy one day.
When ProjectB is compiled I would get an error that "class Two does not exist". The thing to note is that it is a compilation error - not a reference error.
I don't know if I stated it, but a missing external module is also a compile error (as tsc uses external modules to get type information):

test.ts:
import z = require('z');
> tsc --module amd test.ts
test.ts(1,1): error TS2071: Unable to resolve external module ''z''.
Thanks for taking the time to explain your setup!

Reading this thread, one has enough "workflows" to choose from ;)
Apr 1, 2014 at 11:14 AM
FYI grunt-ts is no slower than raw tsc at the moment: https://github.com/grunt-ts/grunt-ts/issues/7#issuecomment-36444636
Apr 2, 2014 at 6:19 AM
Hi all,

Wiktork and Nabog - a fantastically useful discussion there; it's really helping me understand the issues. As a non-Windows developer I'm probably going down Wiktork's route of grunt/require.js, but it was great to see the explanation of things to think about.

I was wondering:
  • In both cases you've referred to large projects. Do you use an existing MVC framework for them - like Backbone/Marionette or Angular? If so, how have they coped with your workflows? If not, did you build your own or go for a completely different approach?
Thanks,
Alastair
Apr 3, 2014 at 12:04 PM
Hi Alastair,

When I started working on my project about a year ago I considered Angular, but the definition files from DefinitelyTyped were poor. Every day I had to fix them or add more special cases, and that was while I was still learning Angular, so aligning my code to the framework was taking me too much time.

In the end I decided to write a small model/view layer in TS, similar to Backbone but lightweight and strongly typed, and that worked well because of my application's scope (a small number of views and models). Most of my application's code is framework agnostic (the application can work completely offline and synchronizes data with the backend when you're online).

The UI looks like this: https://metacode.biz/sandbox/issues-4.png You can judge if that's complex or not (although you don't see that the items are contenteditable, nor most of the backend-in-frontend synchronization logic :) ). For me the 4 KLOC of TS is not that large - I think nabog's project is far bigger and ... more "enterprise" :)

I think the definition files are more accurate now, so in your case I'd build a prototype to check that and to get a feel for whether the framework feels natural. Oh, and if you're going down the AMD path I'd consider making everything an AMD dependency and dropping the /// reference syntax altogether. See pure.d.ts and pure.js here: https://github.com/wiktor-k/ts-amd/tree/master/scripts/lib It works well for any non-TS code that I have.
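A minimal sketch of that pattern (the "lib/legacy" module name and its draw function are made up, not the actual files from the repo):

/* lib/legacy.d.ts - hand-written declarations for a plain JS library
   whose lib/legacy.js has been wrapped in a define() call */
declare module "lib/legacy" {
    export function draw(element: HTMLElement): void;
}

/* consumer.ts - tsc takes the types from the .d.ts at compile time,
   require.js loads lib/legacy.js at run time; no /// <reference> needed */
import legacy = require("lib/legacy");
legacy.draw(document.body);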
Apr 4, 2014 at 9:03 AM
Hi Wiktork,

Thanks for that. I'm still not decided on the AMD issue but I am indeed working through a prototype.

So far, it's not clear to me why I should use AMD/require instead of just references, given that I grunt everything into a single app.js file, which so far seems to be pretty quick, and that (as you mentioned) the definition files are now very comprehensive. What advantage do you see?

I'm keen to use a third-party, well-established MVC if possible. We have an existing home-grown JavaScript MVC based loosely on Backbone, but I'd like to move to something where lots of other people have done the wheel-inventing for me. I'm looking at Backbone/Marionette at the moment and working through the "Gentle Introduction to Marionette" guide by David Sulc (https://leanpub.com/marionette-gentle-introduction), but with TypeScript instead.

It's working OK, and TypeScript definitely makes for a smoother and more reliable experience. But it's clear that as TypeScript gains in popularity there's scope for a more strongly-typed version, taking advantage of the compiler to reduce the number of "magic strings". I wouldn't be at all surprised if we see a "TypeScript MVC" in the near future.

cheers,
Alastair
Apr 4, 2014 at 5:52 PM
Hi,

Well, there are two approaches to modules in TypeScript - you go with external modules and AMD, or you go with internal modules and references. You don't want to mix them because it becomes messy really quickly. I use AMD as I was already using require.js and its plugins (text plugin, css plugin) when I worked on JavaScript projects. Then I concatenate the files into one, but only during the production build (not development), using grunt: https://github.com/wiktor-k/ts-amd/blob/master/Gruntfile.coffee Nabog uses internal modules and it works fine too. I like external modules/AMD as they look similar to Java imports, they look like the modules proposed for ES6 (internal modules have been dropped from ES6, but TS will support them) and... well... they look cleaner to me - import is part of the syntax, whereas /// <reference> looks like an ugly hack to me.

But as you can see from this thread, it's more a matter of style than of a clearly superior approach (like using spaces instead of tabs).

Using a third-party MVC is a smart choice if you have a development team and want to concentrate on writing a product instead of a framework and THEN a product :)

Yeah, "magic strings" is what annoys me too. See events (DOM events or Backbone events). For my project I wrote a simple class more similar to events used in Chrome extensions and that worked better with static types (one object per event type like onChange instead of functions accepting magic strings like addEventListener). But that's a topic on designing TS libraries and not workflow for typescript projects :)

Wiktor
Apr 8, 2014 at 1:25 PM
Just adding to @wiktork's explanations above...

Firstly, we are talking about client (browser) code. For server-side code (Node.js), one would always use external (CommonJS) modules.

The primary deciding factor between external and internal modules in the browser is the management of the global scope. The internal modules approach starts off by partitioning the global scope into "namespaces", where each namespace is a container for related types. With the external modules approach nothing is added to the global scope.

Here is an example of the two approaches:

External Modules
/* globals.ts */
export enum Colour {
    red,
    blue
}

/* foo.ts*/
// Error: Could not find symbol Colour
var colour = Colour.blue;

// Okay
import enums = require('<path>/globals');
var colour = enums.Colour.blue;
Internal Modules
/* globals.ts */
module enums {
    export enum Colour {
        red,
        blue
    }
}

/* foo.ts */
/// <reference path='<path>/globals.d.ts' />
var colour = enums.Colour.blue; // okay
Each approach has its advantages.

External modules Pros
  • Cleaner conceptual model. Every type used in a specific file must be imported; no access to the global scope.
  • Language support for the import statement.
External modules Cons
  • Import hell. Every type used in a specific file must be imported. This conflicts with the objectives of modularisation and one class per file. See workitem #2212 for a description of this problem.
  • Higher maintenance/refactoring costs. Changing the file name of an exported module would require having to edit every importing file.
Internal modules Pros
  • More scalable. Types can be added to the global namespace and made available throughout the code-base.
  • Less maintenance/refactoring costs. See my example above on why this is true.
Internal modules Cons
  • The <reference> mechanism is still in a state of flux. No native language support has been proposed.
  • Requires careful management of the global scope. Namespaces and types can be overwritten, resulting in runtime errors.
How internal modules can be improved

We would require language support in order to ensure internal modules enjoy the same "clean conceptual model" as external modules:
/* globals.ts */
module enums {
    export enum Colour {
        red,
        blue
    }
}
Object.freeze(window.enums); // Ensure namespaces are read-only

/* foo.ts*/
/// <reference path='<path>/globals.ts' />
var colour = enums.Colour.blue; // Should be error

// New keyword "import namespace"
import namespace enums;

var colour = Colour.blue; // Now okay
In summary:
  • The discussion of external vs internal modules is relevant only for client code.
  • There is a lot of room for improving support for internal modules.
  • Go with external modules if you have a small or medium sized project, or you do not have good control over the global scope (lots of externally loaded scripts).
  • Go with internal modules for large projects where you have a good handle on the global scope.
Related discussions:

TypeScript namespacing and modules
Is module confusion holding back TypeScript?

Also refer to the official documentation for a basic introduction.
Apr 8, 2014 at 9:59 PM
Edited Apr 8, 2014 at 9:59 PM
Note: Object.freeze is a bad idea if added by default. It makes property access slower because of the added safety checks.

http://jsperf.com/freeze-vs-seal-vs-normal/3

Some 2D game engines I know of stopped doing this a while ago to gain speed. If freeze were to be added, it would need to be optional - perhaps via a "sealed/frozen" modifier.

In regards to workflow, also consider source control. I usually set up '.gitignore' to exclude all files generated from .ts (*.js, *.min.js, *.map) under a single folder nested in the main web project folder. This helps me keep the .js files in the parent directories/folders. This, I guess, is more useful for those using multiple .js files instead of compiling into one big one, but I thought I'd mention it anyhow.
Apr 9, 2014 at 10:35 AM
@jamesnw,

Object.freeze cannot be added automatically because the compiler will never know if all the types for a specific namespace have been declared. That is something that will need to be added by the developer - perhaps in a script footer.

The poor performance of Object.freeze is largely due to a bug in Chrome.

In Visual Studio there is no need to worry about generated files making their way into source control, because (since TS version 0.9) generated files are not included in the project, and, because the Subversion, Git, or TFS clients work off the project configuration, these are automatically excluded from source control.
Apr 9, 2014 at 2:38 PM
Yes, I'm aware of the Chrome bug, but it is still currently slower in many other browsers as well (though I admit not by much). Unfortunately, people don't always update their browsers to the latest versions (especially large development firms with strict policies).

tschoartschi mentioned he doesn't use Visual Studio. Also, I'm afraid your statement isn't true (especially for those who don't use VS). You are right about TFS, but not Git. I have Git set up on many projects, and if you don't add files to .gitignore, then ALL files (in the project or not) are picked up. In fact, I use the "Microsoft Git Provider" extension, and had the same issues. All other extensions I've tried do the same thing. In fact, you can install the Git GUI and see for yourself that all .js files are picked up (Git doesn't care about VS). This is not TS related, but Git related.
Apr 9, 2014 at 3:23 PM
Okay, yes, a minor point, git is a special case.

As people move to 100% TypeScript, the .gitignore shouldn't be that complicated an issue.
Apr 9, 2014 at 3:32 PM
100% TypeScript? :) Not sure what that means, but any web project would most likely never be 100% TypeScript because of .js files such as jQuery, etc., that are used by many projects and exist only as .js files (possibly with .d.ts files). That said, I'm with you - the internet should all move to TS only. ;)
Apr 16, 2014 at 8:32 PM
This is a good discussion and I'm intrigued by some of the approaches here.

I started a TS project a while ago (a game) and I also use multiple VS TS projects and a Node.js project. I learned the hard way that I was supposed to start out with the AMD module approach... so I spent part of a day switching all the internal modules from my other projects into AMD modules.

Now, however, the only pain point I have is that my Shared TS project must copy its files into any dependent projects:
  • Shared (TS)
  • Engine (TS)
  • Server (TS)
  • Node (Node.js)
Since my codebase contains both server-side and browser-side logic: Engine runs in the browser but uses types from Shared, and Server (TS) runs in Node but requires types from Shared.

Shared must be referenced by Engine and Server, as I share interfaces and utility methods. In order to do this with minimal pain, I have a build event on Shared that copies the .js and .ts files over to an Engine\Shared directory, a Server\Shared directory and a Node\Server\Shared directory (effectively).

My Engine and Server TS projects are configured to output to ..\Node\public\javascripts (Engine) and ..\Node\Server (Server) respectively, so I don't need build events for those.

Then in my Engine.ts (for example), I can do this:
import Models = require('./Shared/Models');
where "Models.ts" is copied via build event from Shared TS project.

This works, but I'm not super enthused about manual copy commands. Does anyone have an AMD/CommonJS-based approach that would work without that?
Apr 17, 2014 at 8:22 AM
@kayub correct me if I'm wrong, but you want to use the same files in both projects without copying anything, while using --module amd for the browser and --module commonjs for Node?

If so, with a little bit of magic you can use TS files compiled with --module amd in node using this package: https://www.npmjs.org/package/amd-require

You just add a line like this in your Node.js code: require('amd-require').baseUrl = '.'; and then the define() and require() calls from AMD just work.

Or you can use browserify to do the reverse - get commonjs modules working in the browser. I think it would require an additional build step but I'm not sure as I haven't used this approach.
Apr 23, 2014 at 3:14 PM
The main problem from our project's perspective is that when using
  • internal modules
  • one class per file -approach and
  • outputting to single file with 'tsc --out'
you have to maintain the '/// reference' declarations by hand, in one way or another, to get the single-file output generated correctly.

What do you think - would it make sense if the compiler itself produced single-file output with all the necessary ordering already performed? It certainly has all the information available.

(In other words, explain to me why on earth this is not already happening. Coming from any enterprise-y background, Java for example, this is the least you could expect from the compiler. And the TypeScript staff seems to sweep this issue under the rug every time. (#1590))
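To illustrate the problem (made-up files): with internal modules the emit order matters, and the /// references are currently the only hint 'tsc --out' has for sorting:

/* Animal.ts */
module app {
    export class Animal {
        name: string;
    }
}

/* Dog.ts - without the reference line below, the emitted order depends on
   the order the files are passed to tsc, and if Dog comes out first the
   __extends call fails at run time because app.Animal is still undefined */
/// <reference path="Animal.ts" />
module app {
    export class Dog extends Animal {
        bark() { return this.name + " says woof"; }
    }
}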
Apr 23, 2014 at 6:02 PM
PSP2 - you're definitely correct. A major shortcoming of TypeScript today is that it's far more difficult than it should be to follow the one class per file approach.

Very simple projects (demos or tests) don't run into this problem because they can live all in one file, and "application scale" projects don't either, since the time spent implementing the work-arounds is small compared to their entire scope. It's the developers working on small to medium real-world projects who feel this pain the most.

I'm hopeful that with TypeScript 1.0 out the door we'll see some significant improvements here.
Apr 24, 2014 at 7:33 AM
For reference, this is how Palantir is going about their TypeScript modules: https://github.com/palantir/eclipse-typescript/issues/128
May 1, 2014 at 8:04 PM
Edited May 1, 2014 at 8:05 PM
wiktork wrote:
@kayub correct me if I'm wrong but you want to use the same files in both projects without copying anything but you're using --module amd for the browser and --module commonjs for node?

If so, with a little bit of magic you can use TS files compiled with --module amd in node using this package: https://www.npmjs.org/package/amd-require

You just add a line in your node.js code like that: require('amd-require').baseUrl = '.'; and then define() and require() calls from AMD just work.

Or you can use browserify to do the reverse - get commonjs modules working in the browser. I think it would require an additional build step but I'm not sure as I haven't used this approach.
I checked and am using CommonJS across the board.

No, the issue is at compile time in VS: I need the references in the appropriate folders.

For example, in Server\Models.ts I want to go and get some common interfaces:
import Shared = require('./Shared/Models');
For this to work, that TS file must be there at compile time. To get it there, I use a post-build event on the Shared project to copy it into the destination directories.

On a related note, I wish TS supported module folders... so I could just require a folder with a bunch of TS files instead of individual ones.
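One partial workaround I've seen sketched (made-up file names, not what my setup currently uses) is a hand-maintained aggregate module that re-exports the folder's modules, so consumers import just one file:

/* Shared/all.ts - re-export each module in the folder from one place */
export import Models = require('./Models');
export import Utils = require('./Utils');

/* consumer.ts */
import shared = require('./Shared/all');
var model = new shared.Models.Model(); // assumes Models exports a Model class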
May 16, 2014 at 9:55 AM
For what it's worth, I started our enterprise-scale project using internal modules (as detailed above by @nabog), and have just recently switched over to external modules using RequireJS. I'm quite happy with the result, and am fairly certain this will be much more maintainable as the project grows.

The main issue we had with internal modules was the ordered loading of scripts - if you get one script loaded in the wrong order, your app blows up at runtime. We use the excellent grunt-ts to compile, which generates a useful reference.ts file that you can tweak to specify the order of your TS scripts. But you have to get the order right, and it doesn't concern itself with any external JS libs that also need to be loaded first and in a specific order. When your project gets over 100+ TS scripts, and you create a class that has several dependencies and is also a dependency for other classes, you have to be sure that it's loaded in a very precise order. When you have even a few people working on a project, this isn't sustainable.

It's also worth noting that using ASP.NET script bundling would still expose the same issue - you need to ensure the correct load order, as ASP.NET doesn't know about any script dependencies. Although we have .NET services, we chose not to use ASP.NET anyway, as it's an unnecessary dependency for an HTML/JS app.

So we switched to external AMD modules and RequireJS. Yes, you have to have imports at the top of each file - but this isn't much different from namespace/package imports in C#/Java, and it makes it very clear what the dependencies are for a class. And if you find you're amassing a huge list in a file, then it's a good hint that your class is perhaps trying to do too much and it's time to refactor. Once you've set up your RequireJS config with any specific paths and shims for non-AMD libs, you're pretty much good to go.
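For reference, such a RequireJS config looks something like this (the paths and the legacy lib are made-up examples):

require.config({
    baseUrl: 'scripts',
    paths: {
        jquery: 'lib/jquery-2.0.0.min',
        underscore: 'lib/underscore-min'
    },
    shim: {
        // a non-AMD library that exports a global and depends on jquery
        'lib/legacyWidget': { deps: ['jquery'], exports: 'LegacyWidget' }
    }
});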

Anyway, it's early days, but this is certainly proving to be a much more sustainable workflow for enterprise apps.
May 16, 2014 at 10:09 AM
@MarcusWhit, just a heads up: grunt-ts can help you with external modules as well.

If you put ///ts:import=commandServices in your file, grunt-ts will generate something like:
///ts:import=commandServices
import commandServices = require('../../../types/commandServices'); ///ts:import:generated
before it compiles the TypeScript file. The good news is that a) you don't have to care about relative paths, and b) the path is updated if you move the file around in your project.

Also supported are these transforms:

///ts:export=someThing that does an import + export
///ts:ref=someThing that does a traditional /// <reference> import.

Design notes: https://github.com/grunt-ts/grunt-ts/issues/85
May 16, 2014 at 10:19 AM
@basarat - that's gold. We were just complaining yesterday of all the string paths and issues created if you want to move files around! Thanks.
May 16, 2014 at 4:04 PM
MarcusWhit wrote:
Yes, you have to have imports at the top of each file - but this isn't much different to namespace/package imports in C#/Java, and makes it very clear what the dependencies are for a class.



@MarcusWhit, TypeScript imports are not the same as C# using directives when following the one class per file approach. (They are only equivalent if, in the TypeScript case, all the types in a namespace are declared in one file which is then imported.)

Yes, if you have very complicated dependencies between classes then RequireJS would be the right solution.

It might also be worth asking "why do I have such complicated dependencies?"
May 16, 2014 at 5:03 PM

I didn't say they're the same, I just suggested it's similar to what most are already used to in C#/Java. Some don't want to have several require statements in each file, but it's really a non-issue when you consider this.

In my experience, the dependencies don't need to be especially complicated to make listing them all in a perfect order somewhat painful. Each class may only have a few dependencies, but if you have a large enterprise application with hundreds or thousands of classes, the dependency tree is going to be difficult to manage manually in one central list. In a small app it's unlikely to be an issue.

Each to their own though.

May 16, 2014 at 6:43 PM
Edited May 16, 2014 at 6:46 PM
Thanks, this was explained wonderfully. I really like your idea of the "import namespace" keywords. I can't believe that I've spent the better part of a day and a half researching modules and how to correctly organize code!! (and I'm still confused as hell!) I barely gave this two minutes of thought in the .NET world! Sorry, just had to vent. Thanks again.
May 18, 2014 at 2:24 PM
@MarcusWhit, there is something that you are doing wrong here. I am not at all sure what the "dependency tree" is that you have to manage.

We have well in excess of the 100 TypeScript files that you have, and less than five script dependencies to manage.

@FreddyV, glad the discussion was useful to you. To be honest we learnt a few things from this as well; in particular @wiktork's point about having a dependency only on a "dumb HTTP server" is a good one.