
Restricting CCK forms in Drupal 6

This article is a programming howto for stripping unwanted fields from forms generated by the CCK module in Drupal 6. It is natural for a content type to gather as many fields as the logical data schema requires (essentially, the decision between adding a field to an existing content type and creating a new content type is dictated by data cardinality and by how independent the life cycles of the individual pieces of data are). Real-world applications, however, usually require that certain subsets of these fields not be editable by everyone, or that they be editable only at certain times or when certain conditions are met (for example, based on another field's value within the node being edited). Understandably, CCK knows nothing about such special needs out of the box, and so it renders all node fields as equals on a common form.

Consider a case where you have already created a complex default form for editing the entire content type, possibly with field groups and a custom theme which renders them in a fancy hierarchical way. Now let's say you have another user role which is only supposed to understand and edit a few fields from the said content type. You want a new form which displays only those fields and saves the submitted values to the node, validating them in exactly the same way as they would be in the full-node edit scenario. To make things even more interesting, suppose that one of the fields is a select box for which you want to restrict the list of allowed values when it is presented on the restricted form.

The correct solution is short in code, but rather tricky conceptually. It requires at least an intermediate level of understanding of the Drupal Form API and theming layer. The following steps can be used as a guideline:

  1. Choose a path for the new form. It is a logical choice to extend the default node edit path: let /node/1234/edit refer to the default form and let /node/1234/edit/custom refer to your custom form. In your module, implement hook_menu() and copy the item from node.module which refers to node/%node/edit. At this point you have an opportunity to turn it into a MENU_CALLBACK (if you don't want the new form to be accessible through the standard tabs) and also to restrict access by defining a new access callback (a sketch of such a callback appears after this list). Don't forget to specify 'file path' => drupal_get_path('module', 'node'), otherwise the file node.pages.inc defined by node.module would not be found. The new item should look like so:
      $items['node/%node/edit/custom'] = array(
        'title' => 'Edit custom',
        'page callback' => 'node_page_edit',
        'page arguments' => array(1),
        'access callback' => 'yourmodule_edit_custom_access',
        'access arguments' => array(1),
        'weight' => 1,
        'file' => 'node.pages.inc',
        'file path' => drupal_get_path('module', 'node'),
        'type' => MENU_CALLBACK,
      );
    
  2. Implement hook_form_alter() to disable access to unwanted fields on your custom form. You will also probably want to set a distinct theme function for the new form, so that you can arrange the wanted fields in a sensible way. The code in yourmodule_form_alter should look like so:
      if ($form_id == 'yourcontenttype_node_form' && arg(3) == 'custom') {
        $form['#theme'] = 'yourmodule_custom_form';
        $allowed_fields = array('field_first', 'field_second', 'field_third');
        foreach ($form as $key => &$field) {
          if (strpos($key, 'field_') === 0 && !in_array($key, $allowed_fields))
            $field['#access'] = FALSE;
        }
      }
    
    The original form might also contain other form elements that need to be disabled and whose names don't begin with field_. You can find out what they are visually by looking at the form and by dumping the keys of $form. One good example is the buttons key, which contains the submit, preview and delete buttons. It is possible that some form elements contributed by other modules are not yet present when your hook_form_alter() is called. In this case, you will likely have to implement another hook_form_alter() in another module and set this module's weight to a big value in the system table, to ensure that it executes last.
  3. Register the theme function yourmodule_custom_form specified above via hook_theme() and also provide an actual implementation in a function called theme_yourmodule_custom_form, which receives $form as its argument (see the sketch after this list). Alternatively, you could declare it in hook_theme() as a template like so:
      'yourmodule_custom_form' => array(
        'arguments' => array('form' => NULL, 'user' => NULL),
        'template' => 'custom_form',
      ),
    
    and then create a template file custom_form.tpl.php in your theme folder.
  4. Within your theme function or template, render the whole form or individual fields like so:
      // Whole form
      // print drupal_render($form);
    
      // Individual fields
      print drupal_render($form['fieldgroup_tabs']['group_somegroup']);
      print drupal_render($form['buttons']['submit']);
    
    In general, if you cannot render the whole form at once because it would produce unwanted artifacts (e.g. unwanted group headers from the original form), a good approach is to render the wanted fields individually first and then call drupal_render($form) inside a display:none div element, which outputs only the as-yet-unrendered elements. This also ensures that any hidden fields are output at all. Mind the security implications: if you failed to set #access to FALSE on some fields in the step above, you would end up with invisible, yet still updatable fields (the user could use Firebug to get at them). Accordingly, rely on the display:none trick for visuals only, not as a method of concealing fields.
  5. The problem statement mentioned a requirement to restrict the allowed values of a particular field. This can be done through an #after_build callback installed on the field in question, like so:
      // In hook_form_alter() if branch: 
      $form['field_first']['#after_build'][] =
         'yourmodule_field_first_after_build';
      // ...
    
      function yourmodule_field_first_after_build($element, &$form_state) {
        unset($element['value']['#options']['']);
        return $element;
      } 
    
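To tie the steps together, here is a minimal sketch of the access callback from step 1 and of the theme registration and theme function from steps 3 and 4. The role name and the exact set of rendered fields are illustrative assumptions; adapt them to your content type:

    // Access callback from step 1 (sketch): allow the custom form only for
    // the right content type; 'restricted editor' is a made-up role name.
    function yourmodule_edit_custom_access($node) {
      global $user;
      return $node->type == 'yourcontenttype'
        && (user_access('administer nodes') || in_array('restricted editor', $user->roles));
    }

    // Theme registration from step 3 (sketch): the function variant.
    function yourmodule_theme() {
      return array(
        'yourmodule_custom_form' => array(
          'arguments' => array('form' => NULL),
        ),
      );
    }

    // Theme function from step 4 (sketch): render the allowed fields and the
    // submit button first, then flush the remaining elements inside a
    // display:none div. Remember that display:none is cosmetic only;
    // protection comes from #access = FALSE set in hook_form_alter().
    function theme_yourmodule_custom_form($form) {
      $output  = drupal_render($form['field_first']);
      $output .= drupal_render($form['field_second']);
      $output .= drupal_render($form['field_third']);
      $output .= drupal_render($form['buttons']['submit']);
      $output .= '<div style="display:none">' . drupal_render($form) . '</div>';
      return $output;
    }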

A word of caution: if you are using a checkbox CCK field in your content type, you might run into a bug when you set #access to FALSE, which prevents the custom form from validating. As a workaround, two patches for optionwidgets.module were available at the time of writing (pick your favorite).

In general, when working with form data structures, remember that their content varies considerably depending on the current processing stage within the form engine. (Unfortunately, these variations and the separation between API and internals are rather badly documented, which makes it easy to write unstable code.) It is also good to realize that FAPI's "form elements" are hierarchical in nature and act as a mind-numbingly powerful (and complicated) adapter layer between database fields and HTML form inputs. In particular, realize that a single FAPI form element might expand into multiple form inputs and/or supply values for multiple database fields.


Drupal 6: Master/detail tabbed navigation

Master/detail navigation is a common scenario in graphical user interfaces for databases. After selecting a master record, the user is given the choice to view all detail records of a particular type that are in some way related to the master record. A natural design choice is to offer tabbed navigation to support this scenario. The first tab contains data from the master record, while subsequent tabs represent different types of associated detail records in tabular format.

The rest of this post explains how to implement the above scenario in Drupal 6. To be more concrete, let's assume that we have a CCK content type Tournament, which is associated by node reference with a CCK content type Result (a tournament contains multiple game results). We also have two views, Tournaments and Results, which list all nodes of the respective type using a table display and furthermore contain exposed filters for all fields - including the tournament title field. The intended final GUI is shown in the following two screenshots:

[Screenshots: the Tournament master tab, and the Results detail tab]

The steps to implement the tabbed master/detail navigation are outlined below. Note that the solution does not use the official Views 2 API (which is still quite undocumented at the time of writing). It is based on trial-and-error experimentation - use at your own risk.

  1. Add a Tournament argument to the Results view definition. The argument should be configured with "Display all values if not present", and employ a validator for the node type Tournament. When set up correctly, the results of a particular tournament may be displayed by specifying the view's path followed by the tournament node ID. For example, invoking the view through path /results/12345 would only display results of tournament 12345.

    Note, however, that the Tournament filter and Tournament field, though redundant in the argument-based view, still appear. We will deal with them a bit later.

  2. Add a "local task" type menu item to create a tab for the Tournament node type. We limit the appearance of the tab to Tournament nodes by specifying a custom access callback.

    function bbo_scoring_menu() {
      // ...
      $items['node/%node/bbo_scoring/results'] = array(
        // Menu titles are run through t() by the menu system itself.
        'title' => 'Results',
        'page callback' => 'bbo_scoring_tournament_results_page',
        'page arguments' => array(1),
        'access callback' => 'bbo_scoring_tournament_results_access',
        'access arguments' => array(1),
        'type' => MENU_LOCAL_TASK,
        'weight' => 100,
      );
      // ...
    }
    
    function bbo_scoring_tournament_results_access($node) {
      return $node->type == 'tournament';
    }
    
  3. Implement the page function so that it outputs the embedded Results view. While at it, we hide the redundant exposed Tournament filter and the Tournament field on this page. Regrettably, this trick does not work in view pre-hooks, which would otherwise seem a more universal solution. (An alternative one-liner using views_embed_view() appears after this list.)

    function bbo_scoring_tournament_results_page($node) {
      $view = views_get_view('results');
      $display_id = 'page_1';
      // Bail out before touching the view if it is missing or inaccessible.
      if (!$view || !$view->access($display_id)) {
        return '';
      }
      // Hide the redundant exposed Tournament filter and field on this page.
      $view->display['default']->display_options['filters']['field_tournament_nid']['exposed'] = FALSE;
      unset($view->display['default']->display_options['fields']['field_tournament_nid']);
      return $view->preview($display_id, array($node->nid));
    }
    
  4. Alter the filter form's action to lead back to the tab page. Without this step, the default URL of the view would be displayed after applying the filter.

    function bbo_scoring_form_views_exposed_form_alter(&$form, $form_state) {
      if (arg(0) == 'node' && arg(2) == 'bbo_scoring' && arg(3) == 'results') {
        // url() takes care of the base path and clean URL configuration.
        $form['#action'] = url('node/' . arg(1) . '/bbo_scoring/results');
      }
    }
    
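As an alternative to step 3, if you do not need to tweak the embedded view's filters and fields, Views 2 also provides views_embed_view(), which checks access and renders a display with the given arguments in one call. A minimal sketch:

    // Sketch: simpler embedding when no per-display tweaks are needed.
    function bbo_scoring_tournament_results_page($node) {
      return views_embed_view('results', 'page_1', $node->nid);
    }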

Migrating Eclipse update sites to P2

This article describes how to upgrade an Eclipse update site from the classic, pre-P2 layout (site.xml together with the plugins and features directories) to the "new and improved" (read: unnecessarily complicated) P2 layout. Considering that P2, the new update manager introduced in Eclipse 3.4, is more or less backwards-compatible with the old layout of update sites, why bother at all? In my experience, browsing update sites with many plug-ins and features used to be somewhat slow before, but with P2 it has become excruciatingly slow. Fortunately, P2 also supports a new update site format designed to alleviate this problem, that is, to speed up update site browsing. However, as far as I know, there is no comprehensive documentation of the upgrade path for those of us still using the classic update site layout. More confusingly, the article Update Site Optimization (coming from the source?) tells only half of the story: as I found out, following its somewhat outdated instructions won't leave you with a working update site. Information from different sources had to be pieced together - hopefully no longer, if you read on.

Classic update site vs. P2 update site

Let's start by comparing the classic update site layout with the newer layout tailored for P2. I suppose you have a working update site organized in the classic way and want to switch to the new P2 one for reasons mentioned above:

| Classic update site | P2 update site |
| --- | --- |
| update site = a world-accessible directory on a web server | same as before |
| update site contains site.xml, which lists the versioned features installable from this site | update site contains content.jar and artifacts.jar, which together supersede site.xml |
| subdirectory features contains one JAR file per versioned feature, referenced from site.xml | same as before, but the references now originate from content.jar and artifacts.jar |
| subdirectory plugins contains one JAR file per versioned plug-in, referenced from the feature.xml files contained in feature JARs | subdirectory plugins contains one .jar.pack.gz file per versioned plug-in, referenced from feature.xml as before, but also from content.jar and artifacts.jar |
| (no counterpart) | update site (optionally?) contains digest.zip - see description below |

While the syntax of feature.xml and site.xml is easy to understand and these files are easy to generate/process by your own tools if need be, the new content.jar and artifacts.jar leave no such hopes. You are advised to treat them as P2's private mess (found inside: fat, obscure XML documents). As you will see next, both these JAR files can and should be generated from a slightly modified version of site.xml.

The plug-in .jar.pack.gz files are also generated - each from the original plug-in JAR found in the classic update site. They are essentially JARs recursively compressed with the pack200 tool first introduced in Java 1.5.
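
For illustration, this is roughly what happens under the hood (plugin.jar is a placeholder; the siteOptimizer application described below wraps both steps, so you won't normally run pack200 by hand):

# "Condition" (normalize) the JAR in place so that packing becomes lossless:
pack200 --repack plugin.jar

# Compress the conditioned JAR into the .jar.pack.gz form served to P2:
pack200 plugin.jar.pack.gz plugin.jar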

The old site.xml and the old plug-in JAR files are never accessed by the P2 update manager, but if you wish to stay backwards-compatible, you should keep them around in your update site as well.

If you read the Update Site Optimization article, you might be wondering whether a "digest" file (digest.zip) is also needed, where to put it, and what for. I have never observed P2 trying to access this file and suspect that it has been superseded by the content.jar and artifacts.jar duo (which the original article fails to mention). However, it is worth noting that the official Eclipse Ganymede update site does contain a digest.zip in the update site directory and an attribute digestURL="http://download.eclipse.org/releases/ganymede/" on the site element in site.xml (the value of this attribute points to the update site, i.e. location of digest.zip). We'll see next how to generate digest.zip just in case.

How to upgrade to a P2 update site

In essence, the following steps are required:

  1. Add a new attribute pack200="true" to the site element in site.xml.
  2. Add a new attribute digestURL="http://your/update/site/url/" to the site element in site.xml (a site.xml sketch showing both attributes follows this list).
  3. Ensure that each of your feature JARs in the features directory contains a feature.properties file (which may be empty). (Here is why.)
  4. Generate a .jar.pack.gz file from each plug-in JAR file in the plugins subdirectory.
  5. Generate digest.zip based on the classic update site (including site.xml).
  6. Generate content.jar and artifacts.jar based on the classic update site (including site.xml).
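
After steps 1 and 2, the site element in site.xml should look roughly like this (the URL is a placeholder and the feature is borrowed from the example site shown at the end of this article):

<site pack200="true" digestURL="http://your/update/site/url/">
   <feature url="features/org.epic.feature.main_0.6.34.jar"
            id="org.epic.feature.main" version="0.6.34"/>
</site>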

Steps 1-3 are simple edits (see the site.xml sketch above) and thus won't be further elaborated. Steps 4 and 5 are (surprisingly) intertwined, as described below. Step 6 is covered in the last section.

How to generate .jar.pack.gz files (Step 4) and digest.zip (Step 5)

Generating the .jar.pack.gz files actually consists of two parts:

  1. For each JAR file in question, "condition" or "repack" the JAR file to prepare it for the second part.
  2. Generate .jar.pack.gz files from the conditioned JAR files, and also generate digest.zip.

In order to condition (repack) a JAR file, run:

$JAVA_HOME/bin/java \
    -jar $launcher \
    -application org.eclipse.update.core.siteOptimizer \
    -jarProcessor -verbose -processAll -repack -outputDir $output_dir/plugins \
    $input_jar_file

The above invocation contains some variables, to be replaced as follows:

JAVA_HOME - path to where Java 1.5 (or newer) is installed
launcher - path to plugins/org.eclipse.equinox.launcher_1.0.101.R34x_v20080819.jar from your Eclipse 3.4 (or newer) installation (the version number in the JAR file name may vary)
output_dir - path to where the conditioned JAR files should be written: the plugins directory of the upgraded update site
input_jar_file - path to the input plug-in JAR file

Sadly, you will have to run the above tool for each JAR file individually. Observe that the conditioned JARs contain META-INF/eclipse.inf, absent in unconditioned ones. Finally, copy site.xml and the features subdirectory of your original update site to $output_dir. This completes part 1.
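
Since each invocation conditions a single JAR, a small shell loop helps; classic_site below is an assumed name for the directory holding your original update site:

for input_jar_file in classic_site/plugins/*.jar; do
    "$JAVA_HOME/bin/java" \
        -jar "$launcher" \
        -application org.eclipse.update.core.siteOptimizer \
        -jarProcessor -verbose -processAll -repack \
        -outputDir "$output_dir/plugins" \
        "$input_jar_file"
done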

For part 2, run the following command. Unlike part 1, this processes all the conditioned JARs it can find through site.xml:

$JAVA_HOME/bin/java \
    -jar $launcher \
    -application org.eclipse.update.core.siteOptimizer \
    -digestBuilder \
    -digestOutputDir=$output_dir \
    -siteXML=$output_dir/site.xml \
    -jarProcessor -pack -outputDir $output_dir $output_dir

You should now have $output_dir/plugins full of conditioned JARs and a corresponding .jar.pack.gz file for each of them. You should also have $output_dir/digest.zip. If not, maybe you forgot to take care of feature.properties in Step 3 mentioned earlier.

How to generate content.jar and artifacts.jar (Step 6)

Here is how (set a human-readable $project_name first):

$JAVA_HOME/bin/java -jar $launcher \
    -application org.eclipse.equinox.p2.metadata.generator.EclipseGenerator \
    -updateSite ${output_dir}/ \
    -site file:${output_dir}/site.xml \
    -metadataRepository file:${output_dir}/ \
    -metadataRepositoryName "${project_name} Update Site" \
    -artifactRepository file:${output_dir}/ \
    -artifactRepositoryName "${project_name} Artifacts" \
    -compress \
    -reusePack200Files \
    -noDefaultIUs \
    -vmargs -Xmx256M

The final result

Here is an example of a P2-upgraded, backwards-compatible update site containing a single feature version:

.
|-- site.xml
|-- artifacts.jar
|-- content.jar
|-- digest.zip
|-- features
|   `-- org.epic.feature.main_0.6.34.jar
`-- plugins
    |-- org.epic.debug_0.6.27.jar
    |-- org.epic.debug_0.6.27.jar.pack.gz
    |-- org.epic.doc_0.6.2.jar
    |-- org.epic.doc_0.6.2.jar.pack.gz
    |-- org.epic.lib_0.6.1.jar
    |-- org.epic.lib_0.6.1.jar.pack.gz
    |-- org.epic.perleditor_0.6.23.jar
    |-- org.epic.perleditor_0.6.23.jar.pack.gz
    |-- org.epic.regexp_0.6.1.jar
    |-- org.epic.regexp_0.6.1.jar.pack.gz
    |-- org.epic.source_0.6.34.jar
    `-- org.epic.source_0.6.34.jar.pack.gz

site.xml contains the two new attributes, pack200 and digestURL.

P2 will access content.jar, artifacts.jar, org.epic.feature.main_0.6.34.jar and the plug-in .jar.pack.gz files.

The classic update manager will access site.xml, org.epic.feature.main_0.6.34.jar and the plug-in .jar files.

Automation of the above process using Ant scripts, predefined Eclipse tasks or some such is left as an exercise for the reader. Also note another post with hints on how to debug helper applications started by the Eclipse launcher.

Debugging mod_perl with EPIC

This article describes how to debug mod_perl applications using EPIC and explains some of the current shortcomings. The functionality was tested with EPIC version 0.6.33, mod_perl version 2.0.4, and Apache::DB version 0.14. It is important to note that while the EPIC debugger frontend is fairly stable and debugging mod_perl applications in principle is not very different from debugging remote Perl scripts, at the time of writing there is little experience with EPIC and mod_perl in particular. Furthermore, the procedures described here will appear a little kludgy, as they overload the already existing remote debugging functionality to serve mod_perl debugging. You should expect to invest some time into troubleshooting before you arrive at a working configuration.

Set up Apache::DB without EPIC

The first step is to make sure that the interactive command-line debugger works as expected (without EPIC). You can find a more detailed description of Apache::DB on CPAN and in the mod_perl documentation. In the following, I assume that you have an example Apache site configured as follows:

Alias /modperl/ /home/jpl/modperl/
    
PerlModule ModPerl::Registry
<IfDefine PERLDB>
    PerlRequire /home/jpl/modperl/db.pl
</IfDefine>
PerlPostConfigRequire /home/jpl/modperl/startup.pl

<Location /modperl/>
    SetHandler perl-script
    PerlHandler ModPerl::Registry
    <IfDefine PERLDB>
        PerlFixupHandler Apache::DB
    </IfDefine>
    Options +ExecCGI
    PerlSendHeader On
    allow from 127.0.0.1
    deny from all
</Location>

The file /home/jpl/modperl/db.pl referenced in the above configuration contains:

use APR::Pool (); 
use Apache::DB (); 
Apache::DB->init();

The file /home/jpl/modperl/startup.pl contains whatever is needed to initialize the environment for your mod_perl scripts. For example, we should define @INC there:

use lib qw(/home/jpl/modperl/lib);
1;

Apache::DB requires that Apache is started with the -X (single-process) option. I use apache2ctl -k start -X -DPERLDB, which leaves the Apache process running in the foreground and enables the debugging-related directives by defining the PERLDB token. Whenever a script /modperl/*.pl is loaded in the browser, the terminal running apache2ctl gets an interactive debugger prompt. So far, this is all more or less standard mod_perl configuration, no EPIC involved.

Create a "Perl Remote" debug configuration in EPIC

In Eclipse, open the dialog Run/Debug Configurations... to create a launch configuration of type "Perl Remote" with the following settings:

Project - The Perl project containing your scripts and modules.
File to execute - It doesn't matter; choose any script from the project.
Local Host IP - It doesn't matter; enter 127.0.0.1.
Target Host Project Installation Path - The file system path on the remote machine under which your Perl project is deployed (e.g. /home/jpl/modperl). If you are running Apache on the same machine as Eclipse, enter the path displayed in Project Properties as "Location". If you notice that breakpoints are ignored, this is a key setting to tweak; more on that later.
Port - The port on which EPIC will listen for a connection from the debugger running on the target Apache host. Enter a port number that is not firewalled; in case of doubt, test connectivity with netcat or the like (see the example after this list). The default is 5000.
Capture Output - Must be unchecked, as you don't want EPIC to attempt hijacking the STDOUT and STDERR of your scripts!
Create Debug Package - Should be checked on the first invocation (see below). It may be unchecked later on.
Debug Package File Path - The path to a new file that EPIC will create locally when the debugging session is started, for example /tmp/epicdb.zip. This ZIP file will contain a custom version of perl5db.pl, which must be used instead of Apache/perl5db.pl in order to support setting breakpoints (it refers to EPIC's epic_breakpoints.pm module, also included in the ZIP file). More on that in the next section.
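
For instance, to verify from the Apache host that EPIC's listening port on the Eclipse machine is reachable (the IP address and port are example values):

nc -vz 192.168.0.10 5000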

Run the debug configuration. You should see a Remote Perl Script item in the Debug view (switch to the Debug perspective if necessary). Also, on disk, the ZIP archive mentioned above should now be available. Terminate the debugger session now, as we are not done with adjusting the configuration yet.

Install the EPIC helper modules on the target host

Copy the ZIP archive created in the previous step to the Apache host, create a new directory there and unpack the ZIP archive into it. You will notice that all project files are contained in it. However, you should only leave *epic*.pm and perl5db.pl in the target directory - remove all other files and subdirectories.
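
A possible command sequence on the Apache host, using the placeholder path that also appears in the configuration below and the example ZIP path from the launch configuration above:

mkdir -p /path/to/EPIC/provided
cd /path/to/EPIC/provided
unzip /tmp/epicdb.zip
# Keep only the debugger helpers; everything else is a copy of project files.
find . -mindepth 1 -maxdepth 1 ! -name 'perl5db.pl' ! -name '*epic*.pm' -exec rm -rf {} +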

Redirect Apache::DB to contact EPIC

Edit your Apache configuration like so:

Alias /modperl/ /home/jpl/modperl/

PerlModule ModPerl::Registry
<IfDefine PERLDB>
    PerlSetEnv PERLDB_OPTS "RemotePort=127.0.0.1:5000 DumpReused ReadLine=0 PrintRet=0"
    PerlSetEnv PERL5DB "BEGIN { $DB::CreateTTY=0; require '/path/to/EPIC/provided/perl5db.pl'; }"
    PerlRequire /home/jpl/modperl/db.pl
</IfDefine>
PerlPostConfigRequire /home/jpl/modperl/startup.pl

<Location /modperl/>
    SetHandler perl-script
    PerlHandler ModPerl::Registry
    <IfDefine PERLDB>
        PerlFixupHandler Apache::DB
    </IfDefine>
    Options +ExecCGI
    PerlSendHeader On
    allow from 127.0.0.1
    deny from all
</Location>

/path/to/EPIC/provided/perl5db.pl should be adjusted to point to where you unpacked the ZIP archive.

RemotePort should be adjusted to include the IP address and port where Eclipse/EPIC are listening for debugger connections.

Edit db.pl to include EPIC helper modules

Add the directory in which perl5db.pl and epic_breakpoints.pm are located to @INC. I suggest that you edit the file db.pl, which was mentioned above:

use lib qw(/path/to/EPIC/provided);
use APR::Pool ();
use Apache::DB ();
Apache::DB->init();

Test debugging without breakpoints

In EPIC Preferences, enable the "suspend at first line" option. Launch the Remote Perl debug configuration created earlier. On the target host, restart Apache in single-process mode. Load one of your scripts in the browser. You should see in EPIC that the debugger suspends in one of the handler modules (e.g. ModPerl::Registry). From there you can step through the code to eventually enter your scripts and modules. Resume execution, then try reloading the script or loading another one; EPIC should suspend again. If this doesn't work, there is no sense in going further. For troubleshooting, you can enable the Perl debugger console in EPIC Preferences.

The proper way to terminate the debugging session is to stop the Apache process. EPIC should detect this and terminate the session automatically. If you instead attempt to terminate the session from EPIC, you may get into trouble and may have to kill the Apache process with kill -9 later. In case of problems, make sure that you have no runaway Apache processes. The relative order of starting the EPIC debugging session and the Apache process doesn't matter; obviously, both must be running before you trigger a script to be debugged by making a request in the browser.
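
For reference, a clean shutdown and the emergency variant (the PID is a placeholder):

# Clean: stop the single-process Apache; EPIC ends the session automatically.
apache2ctl -k stop

# Emergency: find and kill runaway Apache processes left behind.
ps aux | grep apache2
kill -9 <pid>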

Test breakpoints in your own modules

Do not set breakpoints in scripts just yet. Set a breakpoint in one of your own modules and check whether EPIC properly suspends on it. If the breakpoint is ignored, check the file workspace/.metadata/.log for warnings like "Could not map local path ... to a remote path". If they appear for the file in which you set breakpoints, try adjusting "Target Host Project Installation Path" in the Perl Remote launch configuration. Watch out for possible path variations due to symlinks. Essentially, the path entered in the launch configuration should correspond to the path used by Perl internally to refer to files on the remote host. You can see those internal paths if you enable the debugger console in EPIC Preferences and step through your code. For example, I had trouble with breakpoints being ignored because the Perl debugger referred to files as /home/jpl/modperl/hello.pl, while the location displayed in file properties in EPIC was /mnt/data/home/jpl/modperl/hello.pl. The project's include path configured in EPIC and the actual include path on the remote host are also used in the mappings. As a general rule, the less the remote host differs from the development machine on which EPIC is running, the better.

Test breakpoints in scripts

Breakpoints in scripts don't work out-of-the-box with mod_perl and EPIC. The reason is that EPIC gets no opportunity to actually set breakpoints in scripts, which are loaded by mod_perl in a custom way and converted dynamically into packages. Fortunately, there is a little trick which allows suspending in scripts. Edit the target script to include $DB::single = 1; immediately before the line on which you want to suspend. Also don't forget to set an EPIC breakpoint as usual, like so:

$DB::single = 1;
print "suspend before print\n"; # set EPIC breakpoint on this line!

The debugger should now suspend on the correct line.